Posted by: noone_511, 1 year, 9 months ago
In this demo, I will show how to develop a workflow for creating an immersive, auto-generative VR project using Max/MSP and Unreal Engine 5.
Although much has been written and shown about immersive techniques for asynchronous VR projects, very little can be found on how to set up a real-time immersive VR space. Using EEG and ECG Arduino sensors to generate emotionally adaptive music in Max/MSP and on a hardware modular synthesizer, and encoding the sounds in Higher-Order Ambisonics (HOA) with the Spat library for Max/MSP, I will render the music in real time in binaural format for the VR headset's headphones.
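As a rough illustration of the sensor-to-Max data path, here is a minimal Python sketch (using the pyserial and python-osc packages) that reads comma-separated EEG/ECG values from the Arduino's serial port and forwards them to Max/MSP as OSC. The port name, baud rate, message format, and OSC addresses are assumptions for illustration; in practice the same bridge can also be built directly inside Max with the serial object.

```python
# Hypothetical bridge: read Arduino biosensor values over serial and
# forward them to Max/MSP as OSC. Port name, baud rate, and OSC
# addresses are illustrative, not the project's actual configuration.
import serial                                   # pyserial
from pythonosc.udp_client import SimpleUDPClient

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)  # Arduino port (assumed)
client = SimpleUDPClient("127.0.0.1", 7400)             # Max udpreceive port (assumed)

while True:
    raw = ser.readline().decode(errors="ignore").strip()
    if not raw:
        continue
    try:
        # Assume the Arduino prints "eeg,ecg" as comma-separated integers.
        eeg, ecg = [int(v) for v in raw.split(",")]
    except ValueError:
        continue  # skip malformed lines
    # Normalise the 10-bit ADC range to 0..1 before sending to Max.
    client.send_message("/sensors/eeg", eeg / 1023.0)
    client.send_message("/sensors/ecg", ecg / 1023.0)
```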
The headset tracking data is gathered separately in Max/MSP and UE5 to minimise latency. The generated music also modifies the 3D environment and the Niagara systems in Unreal Engine 5, with the relevant data sent via OSC and routed through bespoke Unreal Blueprint references.
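To make the Max-to-UE5 link concrete, here is a hedged sketch of what that OSC traffic might look like, written in Python with python-osc for readability (in the actual workflow Max/MSP would send these messages natively, e.g. with udpsend). All addresses, values, and parameter names are hypothetical; on the Unreal side, a Blueprint using UE5's built-in OSC plugin could bind such addresses and forward the values to Niagara user parameters.

```python
# Hypothetical example of the OSC traffic from Max/MSP to Unreal Engine 5.
# Addresses and parameter names are illustrative only.
from pythonosc.udp_client import SimpleUDPClient

ue5 = SimpleUDPClient("127.0.0.1", 8000)  # UE5 OSC server port (assumed)

# e.g. features extracted from the generated music (amplitude envelope,
# spectral centroid), rescaled to ranges the Niagara system expects
ue5.send_message("/niagara/spawn_rate", 450.0)      # particles per second
ue5.send_message("/niagara/color_hue", 0.62)        # 0..1 hue
ue5.send_message("/environment/fog_density", 0.15)  # scene parameter
```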
The audience will learn a method for integrating tools for real-time sound generation and spatialisation in Unreal Engine, in order to create interactive VR installations that challenge the interaction between user and artwork, destabilising the subject-object hierarchy.