Posted by: Jan Ove Hennig, 2 months ago
Traditionally, the location from which a note emanates is inherently connected to the position of the instrument it was produced by. For example, the timbre of the violoncello in a classical orchestra is married to its defined position in the sound stage to the right of the conductor. With the piano, notes played in the lower register are produced to the left of the player, and higher notes further to the right. When it comes to determining the position of a sound that is fundamentally position-less (such as a digitally generated waveform), spatialization is often considered an additional aspect of the performative process. This is further reinforced by the lack of standardized interfaces for providing control over spatial aspects on the same intuitive level that traditional musical instruments offer over pitch, amplitude, duration and timbre.
As an alternative, "SDI" distributes the instrument itself in space: identical instances of the same sound-generating device are installed throughout the venue and then addressed in real time according to pre-determined rules set by the performer. This stands in contrast to the established practice of controlling the levels of specific sound sources played back through a system of speakers, while the rule-based approach also sets it apart from stochastic processes.
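The post does not specify what such a performer-defined rule looks like, so the following is only an illustrative sketch: a deterministic function that maps each note event to one of N identical instrument instances based on pitch (echoing the piano's low-left/high-right layout mentioned above). The names `NoteEvent` and `route_by_pitch` are my own, not part of SDI.

```python
# Hypothetical sketch of a performer-defined routing rule.
# All names here are illustrative assumptions, not SDI's actual code.

from dataclasses import dataclass


@dataclass
class NoteEvent:
    pitch: int     # MIDI note number, 0-127
    velocity: int  # 0-127


def route_by_pitch(event: NoteEvent, n_instances: int) -> int:
    """Deterministic rule: spread the MIDI pitch range evenly across the
    instrument instances, so low notes sound from instance 0 and high
    notes from the last one."""
    return min(event.pitch * n_instances // 128, n_instances - 1)


# With four instances, rising pitches walk across the space:
for pitch in (30, 60, 90, 120):
    print(pitch, "->", route_by_pitch(NoteEvent(pitch, 100), 4))
    # -> 0, 1, 2, 3 respectively
```

Because the rule is a pure function of the note event, the same performance input always produces the same spatial result, which is exactly what distinguishes this approach from stochastic spatialisation.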
Electrodynamic exciters turn the vibrating objects into representations of the instrument itself. Because the sources from which the sound emanates are identifiable, the audience can appreciate the spatial aspect of the performance on an intuitive level. Selecting and modifying the vibrating objects thus becomes an integral part of the performance, in contrast with conventional spatialisation practice in electro-acoustic music, where loudspeakers are expected to reproduce the composer's or performer's intent precisely, without coloring the sound.
On a technical level, this is realized with compact devices built around a Raspberry Pi running a RNBO patch. They are addressed by Max/MSP over UDP, making communication between the performance interface and the sound generators reliable and nearly instantaneous. In contrast to existing multi-speaker configurations, where the sound is generated by a central instrument or playback device and then reproduced at different positions in the physical space, SDI only sends messages to the individual instances of the instrument, which then convert them into sound.
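The post names UDP as the transport but not the message format. A common choice for controlling patch parameters over UDP is OSC, so here is a minimal, stdlib-only Python sketch of what sending the same control message to every instance might look like. The port, address pattern, and hostnames are illustrative assumptions, not SDI's actual configuration.

```python
# Hypothetical sketch: encode a minimal OSC message and fan it out over
# UDP to several instrument instances. Hosts, port, and the "/level"
# address are illustrative assumptions.

import socket
import struct


def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    return b + b"\x00" * (4 - len(b) % 4)


def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message with a single 32-bit float argument."""
    return _pad(address.encode()) + _pad(b",f") + struct.pack(">f", value)


def send_to_instances(hosts, port, address, value):
    """Send one OSC message to every instrument instance over UDP."""
    msg = osc_message(address, value)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for host in hosts:
            sock.sendto(msg, (host, port))


# Example (hypothetical hosts/port):
# send_to_instances(["10.0.0.11", "10.0.0.12"], 1234, "/level", 0.5)
```

Since UDP is connectionless, each send is fire-and-forget with minimal latency, which matches the "nearly instantaneous" behavior described above; the trade-off is that delivery is not guaranteed by the transport itself.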
In summary, SDI puts the performer in a position to control multiple destinations from a single point of origin without having to assign spatial position as a separate parameter, opening new doors for programmatic ways of integrating real-time spatialisation into performances.