Presented by: Matthias Krüger
The presentation deals with the methodology of composing with motion sensors and crossing the bridge to video production, as well as the aesthetic implications and potentials thereof.
Based on my piece "rosebud" for dancer, sensors and live-electronics, composed for the final concert of the 2021/22 Cursus (dance/choreography: Victor Virnot), which used the BITalino R-IoT IMU sensors (later the NGIMU by x-io Technologies), I developed a new video clip in which the motion sensor data is directly translated into digital VFX.
This project follows my general concern with "hybrid composition", which means not only a transdisciplinary approach on the performance level, but also the total imbrication and structural interdependence of each layer: just as a piece of instrumental acoustic music cannot exist without the motions of the instrumentalist, here too the music cannot be shaped in the desired way without shaping the motions required to produce that very shape. Neither can exist without the other; the choreography becomes the score, and vice versa.
But it also means this: the piece does not need to end with its live performance; it can also exist as an alternative, non-live version.
Initially conceived from the perspective of the sustainability and longevity of a "work of performance art", ensuring that the piece can be viewed in the future beyond the rare opportunities for physical performance, this approach entails a certain openness in the finished form of the piece, allowing it to co-exist in several equivalent versions, for example a live performance and a video version, and leaving unanswered the question of which one is the original.
On the process level, this openness may also create an iterative and reciprocal relationship between certain compositional decisions: whereas choreographic needs may, for example, impose a certain temporal structure on the music, the timing, certain camera perspectives, the lighting for filming, or post-production decisions (VFX, color grading) may in turn impact the interpretation and mise-en-scène of a subsequent live performance.
So: What is the piece? The live performance? Or the video clip? And is the video clip a representation of the live performance?
❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎
With this in mind, the basic concept was above all to preserve the specific physical aura of the live dance performance in an audio/video recording. The path chosen for this was to recreate the relationship between the dancer and the dominant medium: just as he controls the music live and in real time through his movements during the performance, his movements also control the exact shape and inherent energy of the video image. For this, a limited set of VFX was chosen to distort and enhance the video image.
The original live version, premiered in September 2022 at the Centre Pompidou in Paris, already featured a Max/MSP patch which mapped specific motion and contact sensor parameters onto musical and compositional parameters.
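For readers unfamiliar with this kind of setup, here is a minimal sketch of the mapping logic, written in Python rather than in the actual Max/MSP patch: incoming IMU data arrives over OSC, a simple descriptor (acceleration magnitude) is derived, scaled, and forwarded to the synthesis engine as a control value. The OSC addresses, ports, and scaling are assumptions for illustration only.

```python
# Minimal sketch (not the actual patch): receive IMU data over OSC,
# derive a descriptor, and forward it to a synthesis engine as a control value.
import math
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

synth = SimpleUDPClient("127.0.0.1", 9001)        # hypothetical synthesis engine

def on_accel(address, ax, ay, az):
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)   # kinetic "energy"
    gain = min(1.0, magnitude / 4.0)                      # crude 0..1 scaling
    synth.send_message("/synth/gain", gain)               # hypothetical target

dispatcher = Dispatcher()
dispatcher.map("/ngimu/accel", on_accel)                  # hypothetical OSC address
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```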
During a revision, rehearsal, and filming residency at GMEM in Marseille in January 2024, the same setup was kept while the music and choreography were developed further compositionally; we then recorded not only the video footage for the video clip (photography by Zoë Schreckenberg), but also the sensor data along with the audio.
This had two purposes:
- Having precisely matched audio cues, in order to create an audio track exactly synchronized with the images:
  - The patch could be run again remotely, after the fact, so as to record/export one single audio file a posteriori and thus obtain an audio track without any edits.
  - This worked particularly well in "rosebud" because there is no audio input (no voice, no instruments); all sounds originate directly in the computer (different kinds of synthesis, FX, etc.).
  - Needless to say, this audio track is based on a performance that, in that exact shape and form, never actually took place; it is rather a "Frankenstein"-like assemblage of data sets, creating a performance that resembles one that could have taken place and that is, in fact, already being simulated by the illusion that is the video edit.
- Having matching data streams mirroring the movements in the video, in order to map these onto digital VFX processing the video images in post-production, translating the kinetic energy of the live performance into the video and thus transcending a mere video representation of the live performance (a data-logging sketch is given below).
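As an illustration of how such a matching data stream can be captured alongside the audio, here is a minimal sketch (again not the actual Max/MSP patch): it assumes the sensors stream OSC over UDP and simply timestamps every incoming message into a CSV file whose clock is started together with the audio recording. All addresses, ports, and file names are hypothetical.

```python
# Minimal sketch: log incoming OSC sensor data with timestamps so that the
# data stream can later be re-aligned with the audio recording.
import time
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

LOGFILE = "sensor_take_01.csv"       # hypothetical file name
log = open(LOGFILE, "a")
t0 = time.monotonic()                # started together with the audio recording

def log_sensor(address, *values):
    # one line per OSC message: elapsed seconds, OSC address, raw values
    elapsed = time.monotonic() - t0
    log.write(f"{elapsed:.6f},{address}," + ",".join(str(v) for v in values) + "\n")
    log.flush()

dispatcher = Dispatcher()
dispatcher.map("/ngimu/sensors", log_sensor)   # hypothetical OSC address
dispatcher.map("/ngimu/accel", log_sensor)     # hypothetical OSC address

BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```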
Starting from the video edit, which was made according to visual and videographic premises such as consistency and continuity, as well as general aesthetic choices, and which was loosely based on the audio recording of one of several run-throughs, I subsequently had to match the sensor data to the video takes assembled in this way.
Whilst in the live performance the motion sensor data is formatted in OSC, during the recording session it also had to be output simultaneously as MIDI, in order to obtain a controllable and editable visual representation of the data, as well as a format that can be loaded into the same editing environment as the audio and video (such as a DAW). Transitions between different MIDI takes had to be smoothed manually in order to create the illusion of continuity, for which our eyes are more forgiving than our ears.
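A minimal sketch of this conversion step, assuming the sensor values have already been parsed into (time in seconds, value normalized to 0.0-1.0) pairs: they are written out as a MIDI CC lane that a DAW can display and edit, and a simple linear crossfade stands in for the manual smoothing between takes. The CC number, tempo, and file names are assumptions.

```python
# Minimal sketch: turn a normalized sensor stream into a MIDI CC lane,
# and smooth the seam between two takes.
import mido

PPQ = 480          # ticks per quarter note
TEMPO = 500000     # microseconds per quarter note (120 BPM, mido's default)

def samples_to_midi(samples, cc=1, outfile="sensor_lane.mid"):
    """samples: time-sorted list of (seconds, value_0_to_1) pairs."""
    mid = mido.MidiFile(ticks_per_beat=PPQ)
    track = mido.MidiTrack()
    mid.tracks.append(track)
    last_tick = 0
    for t, v in samples:
        tick = int(mido.second2tick(t, PPQ, TEMPO))
        value = max(0, min(127, int(round(v * 127))))
        track.append(mido.Message('control_change', control=cc,
                                  value=value, time=tick - last_tick))  # delta time
        last_tick = tick
    mid.save(outfile)

def smooth_seam(take_a, take_b, overlap=1.0):
    """Linearly crossfade the last `overlap` seconds of take_a into take_b,
    mimicking the manual smoothing done between MIDI takes in the edit."""
    seam = take_b[0][0]
    out = [(t, v) for t, v in take_a if t < seam - overlap]
    ramp = [(t, v) for t, v in take_a if seam - overlap <= t <= seam]
    for i, (t, v) in enumerate(ramp):
        w = i / max(1, len(ramp) - 1)
        vb = min(take_b, key=lambda s: abs(s[0] - t))[1]   # nearest take_b value
        out.append((t, (1 - w) * v + w * vb))
    return out + [(t, v) for t, v in take_b if t > seam]
```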
Together with the Toronto-based multimedia artist Michaelias Pichlkastner, a methodology was then devised to apply this data to a selection of VFX, creating the aforementioned conceptual correspondence between the music and video layers. Using the software/programming environment VVVV, which, like Max/MSP, processes data in real time, we could select and map specific MIDI events (representing the sensor data) and have them control the VFX, playing back both the video source and the MIDI file synchronously.
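Since VVVV is a visual programming environment, the mapping itself is not text code; the following Python sketch only illustrates the kind of logic involved: a recorded CC lane (0-127) is scaled onto an effect parameter and smoothed per video frame. The parameter ranges and curve are assumptions, not the actual VFX settings.

```python
# Minimal sketch of the mapping logic: a recorded CC lane drives the
# intensity of a video effect, frame by frame.
def cc_to_effect_amount(cc_value, lo=0.0, hi=1.0, curve=2.0):
    """Map a 0-127 CC value onto an effect parameter range with a power curve,
    so small movements yield subtle distortion and large gestures strong VFX."""
    x = max(0, min(127, cc_value)) / 127.0
    return lo + (hi - lo) * (x ** curve)

def smooth(previous, target, factor=0.15):
    """One-pole smoothing applied per video frame, to avoid visible stepping
    between discrete 7-bit MIDI values."""
    return previous + factor * (target - previous)

# Example: drive a hypothetical "displacement strength" parameter frame by frame
amount = 0.0
for cc in [0, 32, 96, 127, 64]:          # stand-in for the recorded CC lane
    amount = smooth(amount, cc_to_effect_amount(cc, lo=0.0, hi=10.0))
```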
Furthermore, there was one unexpected yet positive side effect that turned out to be of utmost importance:
- Having the data input, practically a "virtual avatar" of the dancer, enabled me to perfect the piece, debug the patch, and mix it all without needing a live performer to repeat the movements required for trials and testing.
- This turned out to be probably the most important aspect of the process: the creation of this "avatar" of the performer, effectively "downloading and storing their performance and movements", allowed countless playbacks of the same data sets without all the tribulations this would entail for a human performer (a replay sketch follows this list).
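A minimal sketch of such a replay, assuming the CSV log format from the recording sketch above: the logged take is streamed back to the patch as OSC with its original timing, acting as the dancer's "virtual avatar" during testing and mixing. Host, port, and file name are placeholders.

```python
# Minimal sketch: replay a logged sensor take back to the patch as OSC.
import time
from pythonosc.udp_client import SimpleUDPClient

def replay(logfile="sensor_take_01.csv", host="127.0.0.1", port=8000):
    """Stream a logged take back to the patch, preserving the original timing."""
    client = SimpleUDPClient(host, port)
    start = time.monotonic()
    with open(logfile) as f:
        for line in f:
            t, address, *values = line.strip().split(",")
            delay = float(t) - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)   # wait until the message's original time
            client.send_message(address, [float(v) for v in values])
```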
This almost arborescent working process, which started out with the idea of mapping movements onto digital VFX, has ended up leading to many new applications of its methodology as well as altering the exact form of the live performance/version, making the process rather rhizomatic. Compositional authority is thus beginning to blur, shifting away from the composer and towards the dancer (of course) and the editor of the video.
❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎
This methodology seems extremely promising and will be explored further in future projects. On top of everything already explored in "rosebud", another possible application of these principles might open up possibilities for "4D-like renditions" of the video versions of future pieces: where sensors or computers control appliances such as fans or other sensorially perceptible devices, data lanes (MIDI, DMX, etc.) recorded, synchronized, and edited in this way may constitute additional playback tracks for augmented audio/video projections of what would otherwise be just a regular film screening.
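As a tentative illustration of this idea, the following sketch assumes the extra "4D" lane has been exported from the edit as a standard MIDI file: it is played back in real time alongside the film, and each event is forwarded to a user-supplied callback that would talk to the actual appliance (a fan controller, a DMX bridge, etc.). Nothing here reflects an existing setup; the file name and callback are hypothetical.

```python
# Minimal sketch: play back an exported control lane in real time and forward
# each event to a device callback, alongside a film screening.
import mido

def run_extra_lane(midi_path, send_to_device):
    """send_to_device is a user-supplied callback (hypothetical) that talks to
    the actual appliance, e.g. a fan controller or a DMX bridge."""
    for msg in mido.MidiFile(midi_path).play():   # yields messages in real time
        if msg.type == 'control_change':
            send_to_device(channel=msg.channel, control=msg.control,
                           value=msg.value)

# Example usage with a stub device that just prints events:
if __name__ == "__main__":
    run_extra_lane("4d_lane.mid", lambda **kw: print("device event:", kw))
```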
❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎
“rosebud” – hybrid composition for dancer, sensors and live-electronics (multi-channel audio), existing in two alternative versions: as a live performance and as a video clip
Video clip (HD, 1920×1080) for projection (wall/screen). Audio available in stereo (2.0, 2.1) or multi-channel (4.0, 4.1, 5.1, or custom [4+]).
Matthias Krüger (Paris/Hamburg), direction/composition/co-choreography/electronics/video editing/mixing
Victor Virnot (Paris), dance/choreography
Zoë Schreckenberg (Darmstadt), director of photography
Lukas Ipsmiller (Vienna/Athens), lighting and camera assistant
Rikisaburo Sato (Cologne), color grading
Michaelias Pichlkastner (Toronto/Vienna), Digital VFX
Music produced and mixed at IRCAM (Paris, 2022/2025), GRAME (Lyon, 2023), GMEM (Marseille, 2024) and CIRMMT (Montreal, 2024).
Filmed at GMEM, Marseille on January 26, 2024.
Supported by:
- Kunststiftung NRW (Düsseldorf; Arts Foundation of North Rhine-Westphalia)
- Impuls Neue Musik (Berlin)
- Dr. Christiane Hackerodt Kunst- und Kulturstiftung (Hannover)
- Centro Tedesco di Studi Veneziani (Venice)
- Le Vivier - Carrefour des Musiques Nouvelles (Montreal)
- GRAME (Lyon, Centre National de Création Musicale)
- GMEM (Marseille, Centre National de Création Musicale)
- x-io Technologies (Bristol)