HEADER: A Harmonized Enveloping Audio Digital Experience Renderer

Abstract for the HEADER project

An audio-visual interactive composition was designed and written to explore listener envelopment, defined here as how closely connected listeners feel to the work. Aspects of user control are also examined, along with the differences that arise when the same interactive digital signal processing is rendered on different audio sources. The thematic content of both the audio and the visuals is discussed in relation to other aspects of the design, as is the concept of using the internet as a performance tool.


This project was designed in Max/MSP and JavaScript and has been implemented over the web using the Web Audio API. A head tracker controls aspects of the digital signal processing: the user's head rotation and position, captured via webcam, are analyzed and used to control the number of voices in a harmonizer, the dry/wet mix of a chorus effect, the position of the sound source in a binaural field, and the gain of the processed audio.
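
As one illustration of this control scheme, the sketch below maps a hypothetical head-pose reading onto Web Audio parameters. The pose source, the oscillator standing in for the etude audio, and the specific mapping curves are all assumptions for demonstration, not the project's actual patch; the sketch covers two of the four mappings (binaural source position and processed gain).

    // Minimal sketch: head pose driving Web Audio parameters.
    // Assumed: a face tracker elsewhere supplies `pose` objects of the form
    // { yaw, pitch }, both values normalized to [-1, 1].
    const ctx = new AudioContext();

    // Stand-in for the processed etude audio.
    const source = new OscillatorNode(ctx, { type: 'sawtooth', frequency: 220 });

    // HRTF panning places the source in a binaural field around the listener.
    const panner = new PannerNode(ctx, { panningModel: 'HRTF' });

    // Gain of the processed ("wet") audio.
    const wetGain = new GainNode(ctx, { gain: 0.5 });

    source.connect(panner).connect(wetGain).connect(ctx.destination);
    source.start();

    function applyHeadPose(pose) {
      const now = ctx.currentTime;

      // Head rotation sweeps the virtual source around the listener.
      const angle = pose.yaw * Math.PI; // [-1, 1] -> [-pi, pi] radians
      panner.positionX.setTargetAtTime(Math.sin(angle), now, 0.05);
      panner.positionZ.setTargetAtTime(-Math.cos(angle), now, 0.05);

      // Head pitch scales the gain of the processed audio.
      const g = 0.5 + 0.5 * pose.pitch; // [-1, 1] -> [0, 1]
      wetGain.gain.setTargetAtTime(g, now, 0.05);
    }

In the actual design, analogous mappings set the harmonizer voice count and the chorus dry/wet mix.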


Six etudes, written and recorded as one continuous piece, were mixed binaurally, with content composed to highlight aspects of the design and explore listener connectivity. Visuals were created using WebGL in the Jitter environment of Max/MSP and draw on Music Information Retrieval (MIR) techniques to further connect the listener with the work.
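
To give a concrete sense of the kind of MIR feature that can drive visuals, the sketch below computes a spectral centroid with the Web Audio AnalyserNode. The project's own features were computed inside Max/MSP's Jitter environment, so this is an analogue of the idea rather than the project's pipeline, and the oscillator source is a stand-in.

    // Illustrative sketch: spectral centroid (a common MIR brightness
    // feature) computed via the Web Audio AnalyserNode.
    const audio = new AudioContext();
    const analyser = new AnalyserNode(audio, { fftSize: 2048 });

    // Stand-in source; in practice this would be the binaural mix.
    const demo = new OscillatorNode(audio, { type: 'square', frequency: 330 });
    demo.connect(analyser).connect(audio.destination);
    demo.start();

    const bins = new Float32Array(analyser.frequencyBinCount);

    function spectralCentroid() {
      analyser.getFloatFrequencyData(bins); // magnitudes in dBFS
      let num = 0;
      let den = 0;
      for (let i = 0; i < bins.length; i++) {
        const mag = Math.pow(10, bins[i] / 20); // dB -> linear magnitude
        const freq = (i * audio.sampleRate) / analyser.fftSize;
        num += freq * mag;
        den += mag;
      }
      return den > 0 ? num / den : 0; // Hz; brighter audio -> higher value
    }

Called once per animation frame, such a feature could steer a visual parameter such as color or mesh deformation.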


The project was reviewed by an expert panel whose members had backgrounds in digital signal processing, interactive multimedia installations, computer music, visual arts, musical listening, and face-tracking technology. Results show that listeners felt more connected to the audio than in a typical listening experience. Results also indicate that the control aspects of the design are successful and that the digital signal processing is effective across multiple audio sources.