Sympoietic Being, fulldome film (Zeiss-Großplanetarium 2024 and Sous Dôme Festival 2025)

The project investigates the integration of Spat5 (IRCAM's spatialization library for Max) with Unreal Engine 5. Developed through a long-distance collaboration between Berlin and Paris, it bridges immersive audio design, real-time visual interaction, and conceptual aesthetics drawn from posthumanist theory.

Click here for the full film in rectangular format with binaural decoding (redirects to vimeo.com)


Relational Aesthetics

This project coincided with a recent paper I wrote on Nicolas Bourriaud’s Inclusions: Aesthetics of the Capitalocene, exploring posthumanism and the deconstruction of narratives inherited from the Western Renaissance. In the book, Bourriaud examines the evolving role of art amid ecological crises and proposes a "molecular anthropology" focused on the interactions between human and nonhuman entities. At the same time, my collaborator was investigating minerals and creating 3D projections. We agreed on a crystal as the core narrative element, tracing its journey through a hyperdimensional cave and, ultimately, out into the open world.

One of the aesthetic concepts of the piece is the use of mirrors, inspired by Robert Morris—an artist discussed in Bourriaud's book—and his minimalist mirror artworks. Observing Morris’s pieces, particularly Untitled (Williams Mirrors) (1967) and Strike (2012), sparked the desire to explore similar visual phenomena within the Unreal Engine environment.

Click here for the mirror-dimension process film (redirects to vimeo.com)

With the aforementioned aesthetic framework in mind, incorporating text into the second scene further solidified the conceptual grounding of the piece. The text was intended to offer the listener a space for sonic and visual contemplation, as well as conceptual engagement with the visual environment. Throughout the scene, viewers are immersed in a tesseract-like dimension in which objects reflect across mirrors, creating the illusion of infinity. Here the voice makes extensive use of Spat5's HOA convolution reverb (spat5.hoa.conv~), underlining the apparent massiveness of the ‘mirror-cave.’


Screenshot of the film in rectangular ratio

Screenshot of the film in rectangular ratio


Carving the Stone

The 3D environment of this project was built mostly in Unreal Engine 5, with certain frames and objects created in Blender. The piece includes several 3D scans of minerals such as amethyst and quartz, alongside reflective and translucent materials made within UE5. Arthur Wardenski primarily handled the visual design and camera animation, refining aspects like lighting, color, layout, and the modeling of the cave and its outer environment. I worked on the visuals of the tesseract scene (the second scene): its 3D design, camera movements, and material choices (reaching for Morris's infinity mirrors). This scene played a key role in the technical development of the piece, serving as an experimental ground both for transmitting the coordinates of sound objects created in Spat5 and for translating level information into the light behavior of the spheres and crystals in UE5.
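
As a rough illustration of that level-to-light translation, the sketch below maps a sound object's peak level to a normalized intensity that a UE5 dynamic material could read. It is a minimal sketch, not the production Blueprint: the dB floor, gamma curve, and smoothing coefficient are all assumed values.

```python
class LevelToLight:
    """Map a sound object's peak level (dBFS) to a 0..1 light intensity
    for a UE5 dynamic material (illustrative mapping; ranges are assumed).
    A one-pole smoother keeps the crystals from flickering on transients."""

    def __init__(self, floor_db=-60.0, gamma=2.2, smooth=0.85):
        self.floor_db, self.gamma, self.smooth = floor_db, gamma, smooth
        self.y = 0.0

    def __call__(self, peak_db):
        # Normalise dBFS into 0..1, clamp, then apply a perceptual curve
        x = min(max((peak_db - self.floor_db) / -self.floor_db, 0.0), 1.0)
        self.y = self.smooth * self.y + (1.0 - self.smooth) * x ** self.gamma
        return self.y

light = LevelToLight()
print(light(-12.0))   # e.g. feed successive peak values arriving over OSC
```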


Disappearance of Distance

Given the geographical distance between collaborators, an efficient workflow was essential. With ADM files created in Spat5 using a customized version of the ADM recorder patch, Blueprints (Unreal Engine's visual scripting environment) were developed to dynamically map OSC data onto properties of visual objects (Actors), such as brightness, light color, and spatial coordinates. It was crucial to identify precisely which data ranges and formats were needed, filtering out any additional information that might slow down the workflow and the communication between platforms. The data was restricted to two streams: peak level per sound object and Cartesian coordinates.
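
As an illustration of this filtering stage, the sketch below relays only the whitelisted messages and drops everything else. It assumes Spat5-style OSC addresses such as /source/3/xyz, a hypothetical /source/3/peak address for per-source levels, and arbitrary ports; the actual pipeline ran through the customized ADM recorder patch and UE5's OSC support rather than a Python relay.

```python
# Minimal OSC relay: keep only per-source peak level and Cartesian
# coordinates, drop everything else before forwarding to UE5.
# Addresses and ports are assumptions, not the project's actual setup.
import re
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

ue5 = SimpleUDPClient("127.0.0.1", 8000)    # UE5 OSC server (assumed port)
KEEP = re.compile(r"^/source/(\d+)/(xyz|peak)$")

def relay(address, *args):
    """Forward only whitelisted messages; everything else is filtered out."""
    if KEEP.match(address):
        ue5.send_message(address, list(args))

dispatcher = Dispatcher()
dispatcher.set_default_handler(relay)       # catch-all, filtered in relay()

server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
server.serve_forever()                      # listen for Spat5 output
```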


The Sounds of the Hypercave

Initially, a multichannel (mc.) FM synthesizer was developed in Max/MSP. It was used to create (or recreate) acoustic phenomena such as sound bouncing off walls, generating early reflections through the careful placement of sound objects and delay matrices (spat5.delgen) with logarithmic gain offsets. To achieve an authentic cave-like ambience, impulse responses were captured in narrow spaces around Berlin: using a first-order Ambisonics microphone, recordings were made in a mid-20th-century bunker in Friedrichshain and in an abandoned Soviet-era building at Vogelsang, near Zehdenick. These recordings were then upmixed to fifth order using the SPARTA Ambisonics tools, enhancing spatial resolution, which is particularly effective for reverbs, in this case spat5.hoa.conv~. The resulting reverberation convincingly evoked the atmosphere of a cave and was employed extensively throughout the piece (wired in parallel), especially for processing vocals, acoustic sources, and field recordings. Additionally, re-amped field recordings were re-recorded in FOA and likewise upmixed to HOA, further contributing to the overall sense of depth. Granular processing of glass and cello sounds provided textural contrast against the softer backdrop of re-amped water droplets.
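
To give a sense of the logarithmic gain offsets mentioned above, here is a small sketch of a delay tap generator: tap times grow geometrically while each tap's gain drops by a fixed number of decibels. The numbers are illustrative rather than the values used in the piece, and the real work was done by spat5.delgen inside Max.

```python
import numpy as np

def early_reflection_taps(n_taps=8, t0=0.012, spread=1.5, rolloff_db=3.0):
    """Sketch of a delay tap generator with logarithmic gain offsets,
    loosely mirroring what spat5.delgen produces (illustrative values).

    Delay times grow roughly geometrically from t0 (seconds); each
    tap's gain drops by rolloff_db relative to the previous one."""
    delays = t0 * spread ** np.arange(n_taps)      # tap times in seconds
    gains_db = -rolloff_db * np.arange(n_taps)     # logarithmic offsets
    gains = 10.0 ** (gains_db / 20.0)              # dB -> linear
    return delays, gains

delays, gains = early_reflection_taps()
for t, g in zip(delays, gains):
    print(f"{t * 1000:7.1f} ms   gain {g:.3f}")
```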

For literal immersion, the underwater scene's soundscape was built from hydrophone recordings, duplicated and modulated to convey the sensation of moving beneath the ocean’s surface. These sounds were placed in the acoustic space as sound objects within Spat5. A snapshot recording system, integrating the ICST tools, was implemented to automate simultaneous movement across 16 sources.
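
The trajectory logic itself lived in Max (Spat5 snapshots plus the ICST tools), but the sketch below conveys the kind of coordinated 16-source movement involved: sources spaced on a slowly rotating, gently undulating ring, sampled as position snapshots. All parameters here are invented for illustration.

```python
import math

N_SOURCES = 16

def ring_positions(t, radius=2.0, rpm=1.5, bob=0.4):
    """Illustrative trajectory: 16 sources evenly spaced on a ring that
    slowly rotates and bobs vertically, evoking underwater drift."""
    phase = 2.0 * math.pi * rpm * t / 60.0
    coords = []
    for i in range(N_SOURCES):
        a = phase + 2.0 * math.pi * i / N_SOURCES
        x, y = radius * math.cos(a), radius * math.sin(a)
        z = bob * math.sin(phase + i)          # gentle vertical undulation
        coords.append((x, y, z))
    return coords

# e.g. sample a "snapshot" of all 16 source positions at t = 10 s
for i, (x, y, z) in enumerate(ring_positions(10.0), start=1):
    print(f"/source/{i}/xyz {x:+.2f} {y:+.2f} {z:+.2f}")
```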

The decoding process was more complex, but from a technical perspective it yielded highly effective results. Generous input from Thibaut Carpentier contributed to a solution tailored to each space (the Zeiss and the Cité des Sciences). This involved energy-preserving Ambisonics decoding (EPAD), re-referencing the dome's speaker coordinates to the listeners’ zero-elevation point, and selecting a suitable Ambisonics order for each screening.
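
The zero-elevation adjustment can be pictured as a simple change of reference point: shift the dome's speaker coordinates down to ear height before computing the decoder, so that the horizon ring of speakers really sits at 0° elevation for a seated audience. A minimal sketch, with an assumed listener height:

```python
import numpy as np

def listener_relative_directions(speakers_xyz, listener_height=1.2):
    """Re-reference dome loudspeaker positions to the listeners'
    zero-elevation point before computing the decoder (a sketch;
    listener_height in metres is an assumed value).

    Returns azimuth/elevation in degrees for each speaker as seen
    from ear height rather than from the dome's geometric centre."""
    p = np.asarray(speakers_xyz, dtype=float)
    p[:, 2] -= listener_height                   # shift origin to ear height
    az = np.degrees(np.arctan2(p[:, 1], p[:, 0]))
    el = np.degrees(np.arcsin(p[:, 2] / np.linalg.norm(p, axis=1)))
    return az, el

# Example: a speaker on the dome's horizon ring, 6 m out, 1.2 m up
az, el = listener_relative_directions([[6.0, 0.0, 1.2]])
print(az[0], el[0])    # ~0 deg azimuth, 0 deg elevation at ear height
```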

Zeiss HOA decoder and Cité des Sciences HOA decoder


Both screenings, in Berlin and Paris, were exciting and deeply satisfying experiences. Naturally, minor inconsistencies emerged here and there. The most significant difference was between the planetariums themselves: the Paris dome offered superior visual quality but was substantially less precise spatially, owing to the lower resolution of its audio system. Additionally, Ambisonics generally favors listeners positioned at the sweet spot, prompting me to consider whether a different panning approach, such as DBAP or KNN-based panning, might be more suitable in future projects to reduce reliance on a single optimal listening position.
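
For reference, DBAP computes each speaker's gain purely from its distance to the virtual source, with no privileged listening position. A minimal sketch of the gain law, after Lossius et al.; the rolloff and blur values are generic defaults, not tuned settings:

```python
import numpy as np

def dbap_gains(source_xy, speakers_xy, rolloff_db=6.0, r_s=0.1):
    """Distance-Based Amplitude Panning: speaker gains fall off with
    distance from the virtual source, so there is no single sweet spot.
    r_s is a small spatial blur term that avoids division by zero."""
    a = rolloff_db / (20.0 * np.log10(2.0))      # rolloff exponent
    d = np.linalg.norm(np.asarray(speakers_xy) - np.asarray(source_xy),
                       axis=1)
    v = 1.0 / np.sqrt(r_s ** 2 + d ** 2) ** a    # unnormalised amplitudes
    return v / np.sqrt(np.sum(v ** 2))           # power-normalised gains

# Four speakers in a square, virtual source slightly off-centre
speakers = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
print(dbap_gains((0.3, 0.0), speakers))
```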

On the other hand, the initial intention of correlating the coordinates of sound objects with those of visual objects in virtual space also prompted some retrospective considerations. Unless presented in a clear and simplified manner, this correlation did not substantially add unique content beyond emphasizing spatial cues. The concept of being enclosed within a space such as a cave, however, ultimately gave the spatial storytelling greater depth and a more useful materiality for the experience, though one that depends heavily on the resolution of the audio system and the listeners’ seating arrangement.

Overall, using ADM files was beneficial; however, rendering audio from Spat through the ADM recorder required tweaking and customization to match the specific needs of the patch. The real-time nature of the sound-making process also complicated rendering for both audio and visuals, since the ADM files had to be played back while data was simultaneously captured with the UE5 Sequencer using dynamic Materials and Blueprints. Nonetheless, the journey toward integrating UE5 proved fruitful, and the interface developed through this process opens opportunities for numerous new artistic projects and interactive audiovisual frameworks.


Photo by Kathiyn Schiedt


Project developed by Vicente Yáñez (sound and visuals) and Arthur Wardenski (design and visuals)

This project was created with the support of Sound Studies and Sonic Arts, Universität der Künste Berlin and DSAA Design et Création Numérique, École Estienne Paris