Posted by: Lewis Wolstanholme, 1 month, 4 weeks ago
Presented by: Lewis Wolstanholme
This installation explores spatialised deconstructions of timbre and creates immersive transitions between different sonic materials using the Joint Time-Frequency Scattering (JTFS) transform. The JTFS transform produces a multi-dimensional representation of audio by analysing and disentangling the spectrotemporal modulations present within a sound, such as frequency modulations, amplitude modulations, and pitch. It has been shown to closely model how the brain interprets modulatory changes in sonic materials, owing to the relationship between the wavelets it employs and the neurophysiology of the auditory cortex. Using this representation, it is possible to design an iterative resynthesis algorithm, driven by machine learning techniques and gradient descent, that can distort, crossfade, and reshape musical and sonic materials during composition.
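As a rough sketch of how such a loop might look, the code below optimises a signal by gradient descent so that its JTFS coefficients approach those of a target fragment. It assumes a differentiable JTFS implementation such as Kymatio's TimeFrequencyScattering; the constructor arguments, step count, and learning rate are illustrative placeholders rather than the installation's actual settings.

```python
import torch

# Assumes Kymatio's differentiable JTFS transform; the constructor
# arguments below are illustrative placeholders, not the work's settings.
from kymatio.torch import TimeFrequencyScattering

N = 2 ** 16  # length of the audio fragments, in samples
jtfs = TimeFrequencyScattering(shape=(N,), J=12, Q=8, J_fr=5, Q_fr=2)

target = torch.randn(N)                # stand-in for the target fragment
target_coeffs = jtfs(target).detach()  # fixed JTFS representation to match

# Begin from the source fragment and descend towards the target's coefficients.
x = torch.randn(N).requires_grad_()    # stand-in for the source fragment
opt = torch.optim.Adam([x], lr=1e-3)

frames = []  # audio rendered at every step forms the long-form transition
for step in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(jtfs(x), target_coeffs)
    loss.backward()
    opt.step()
    frames.append(x.detach().clone())
```

Rendering every intermediate `x` (rather than keeping only the converged result) is what yields the gradual morph from one fragment to the other, as described below.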
For this work, various recordings and musical fragments have been stitched together to create an immersive, textural, and seamlessly evolving sonic palette. To achieve this, the JTFS resynthesis technique has been used to artificially create long-form passages of audio that transition from one sonic fragment to another. By rendering the audio produced at every step of the gradient descent process, it is possible to expose the inner workings of the resynthesis technique and create passages that emphasise the transition from one source material to another. The products of this resynthesis are then spatialised according to their underlying spectrotemporal modulations and pitch, creating an atmospheric sonic landscape that highlights the modulatory characteristics of a sound at distinct locations within a space.
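The post does not detail the spatialisation mapping itself. As a minimal two-channel stand-in, the sketch below maps a hypothetical per-frame modulation rate and pitch to an azimuth and applies equal-power panning; both `pan_position` and its descriptor inputs are assumptions for illustration, not the installation's multichannel method.

```python
import numpy as np

def pan_position(mod_rate_hz: float, pitch_hz: float) -> float:
    """Hypothetical mapping from a frame's dominant modulation rate and
    pitch to an azimuth in [-1, 1]; the actual multichannel mapping used
    in the installation is not described in this post."""
    return float(np.tanh(np.log1p(mod_rate_hz) - 0.5 * np.log1p(pitch_hz)))

def equal_power_pan(mono: np.ndarray, azimuth: float) -> np.ndarray:
    """Equal-power stereo panning of a mono frame; azimuth in [-1, 1]."""
    theta = (azimuth + 1.0) * np.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)])

# Example: a frame with a 5 Hz tremolo at 440 Hz is placed towards the left.
frame = np.random.randn(2048)
stereo = equal_power_pan(frame, pan_position(mod_rate_hz=5.0, pitch_hz=440.0))
```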
This work has been produced in collaboration with the technologist Christopher Mitcheltree, who has been developing a new approach to employing the JTFS transform during the creative process. Christopher is a PhD researcher at the Centre for Digital Music at Queen Mary University of London, and is also a founding engineer of Neutone: a neural audio plugin, open-source SDK, and community that helps bridge the gap between audio researchers and artists. This work also builds upon many of the techniques originally presented in the 2023 AES paper ‘Hearing from Within a Sound’.