Dynamic Spatial Mixing for Multi-Channel Audio by Aleksandar Zecevic and Kiran Bhumber

This live demo introduces a system for dynamic spatialization and mixing in multi-channel environments using Max/MSP and IRCAM’s SPAT library.

➡️ This presentation is part of IRCAM Forum Workshops Paris / Enghien-les-Bains March 2026

The framework combines adaptive spatial rendering with dynamic mixing to create an evolving, responsive sonic field. Using 3D positional data from sound sources and the listener's perspective, the system selects or combines multiple spatial rendering methods in real time. Audio objects dynamically influence one another and interact with static beds, generating shifting amplitude and frequency relationships and establishing priority-based behavior between moving and static spatial elements.

Designed for flexibility, the system adapts quickly to different loudspeaker configurations and venue types. It also provides parallel binaural rendering for headphone monitoring and remote demonstration, making it suitable for both fixed installations and live performance contexts.
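To make the idea of position-driven renderer selection and priority-based mixing concrete, here is a minimal sketch in Python. It is purely illustrative, not the authors' Max/MSP + SPAT implementation: the distance thresholds, the renderer names, and the bed-ducking rule are all assumptions invented for this example.

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def select_renderer(src_pos, listener_pos, near=1.0, far=10.0):
    """Pick a rendering method by distance band (hypothetical thresholds)."""
    d = distance(src_pos, listener_pos)
    if d < near:
        return "nearfield"   # close to the head: nearfield-style rendering
    elif d < far:
        return "vbap"        # mid-field: vector-base amplitude panning
    return "ambisonic"       # distant: diffuse-field rendering

def priority_gains(moving_level, bed_level, duck=0.5):
    """When a moving object is active, attenuate the static bed
    (one possible priority rule; the real behavior is not specified)."""
    bed_gain = 1.0 - duck * min(moving_level, 1.0)
    return moving_level, bed_level * bed_gain

listener = (0.0, 0.0, 0.0)
print(select_renderer((0.5, 0.2, 0.0), listener))  # close source
print(select_renderer((4.0, 3.0, 0.0), listener))  # mid-distance source
print(priority_gains(0.8, 1.0))                    # bed ducked under a moving object
```

In a real-time context this selection would run per control frame, crossfading between renderers rather than switching hard, but that logic is omitted here for brevity.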