
Presented by: Pedro Lucas

This project is an Interactive Music System (IMS) that uses Mixed Reality (MR) and Spatial Audio technologies for a multi-track looping session. Every track is represented by an "Agent": an entity embodied as a sound source in space and visualized as a virtual sphere. An agent also has an autonomous behaviour that transforms the musical material initially fed to it by a musician performing in real time with the system, as well as its motion, which depends on spatial parameters from the performer and the other agents. Because it is a multi-track session, the performer can summon as many agents as there are tracks, so eventually the session hosts an artificial swarm interacting with the musician.

It operates as follows: the performer creates a sound source by playing a musical line on a piano-like MIDI controller and shaping the sound's properties with filters and effects controlled by physical knobs. The central controller, the Core System, includes a looper that records and repeats this musical material, creating a sound source that can be heard and seen in space. This sound source is the Musical Agent, and it can be moved manually using the MR headset's tracking capabilities. The Spatial Audio system maps the sound source's position onto the loudspeaker array (an ambisonic system), and the MR headset renders the agent as a colored sphere in the physical space.
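The exact position-to-loudspeaker mapping is not specified here, but one plausible sketch of the idea is a first-order ambisonic encoding of a mono source at a given direction. The function name and the ACN/SN3D convention below are assumptions for illustration, not details taken from the project:

```python
import math

def encode_foa(sample, azimuth, elevation):
    """Encode one mono sample into first-order ambisonic channels
    (ACN channel order, SN3D normalisation): W, Y, Z, X.
    Angles are in radians; azimuth 0 points straight ahead."""
    w = sample                                             # omnidirectional component
    y = sample * math.sin(azimuth) * math.cos(elevation)   # left-right
    z = sample * math.sin(elevation)                       # up-down
    x = sample * math.cos(azimuth) * math.cos(elevation)   # front-back
    return [w, y, z, x]
```

A decoder for the specific loudspeaker array would then turn these four channels into per-speaker gains; moving the agent simply updates the azimuth and elevation fed to the encoder each frame.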

Through specific gestures recognized by the MR headset, the agent can be released so that it starts moving autonomously, transforming the loop's musical material (while keeping its sound properties) based on a machine-learning model trained in real time on the musical line the user initially played. Upon releasing an agent, a new one is instantiated, allowing the user to initialize it with a new loop and release it in turn. This process can be repeated several times to build a multi-track musical session, with each looping track embodied as a sphere-like agent travelling around the 3D audio-visual space.
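The release-and-spawn cycle described above can be sketched as a small state model. All names here are illustrative assumptions, not the project's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class MusicalAgent:
    loop: list                           # recorded musical material (e.g. MIDI events)
    position: tuple = (0.0, 0.0, 0.0)    # location in the performance space
    released: bool = False               # True once the agent moves autonomously

    def release(self):
        """Hand control over to the autonomous behaviour."""
        self.released = True

    def catch(self):
        """Return control to the performer for editing."""
        self.released = False

class Session:
    """Multi-track looping session: one active agent, plus released ones."""
    def __init__(self):
        self.agents = []
        self.current = None

    def new_agent(self, loop):
        self.current = MusicalAgent(loop=list(loop))
        self.agents.append(self.current)
        return self.current

    def release_current(self):
        # Releasing the active agent frees the slot; the performer then
        # records a new loop, which instantiates the next agent.
        if self.current is not None:
            self.current.release()
            self.current = None
```

Each repetition of record, release, record grows `session.agents`, yielding the swarm of looping tracks the text describes.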

As the agents move freely around the performance area, the user can also move through the physical space, catching released agents to modify their musical loops and sound properties, then releasing them once again in this human-machine musical interaction.
