Posted by: johnnytomasiello, 2 years, 4 months ago
Introduction
A System for the Synchronous Emergence of Music Derived from Movement is an immersive audio and visual work whose purpose is to explore an intentional relationship between the movement of an artist’s hand (brush, pen, etc.) and a generative, interactive, computer-assisted compositional and performance system that is directly informed by those movements in real time.
History
This project was initially designed for an artist who creates kinetic visual artworks (digital paintings) in a live setting, in collaboration with performing musicians. At first, the idea was to offer the artist a self-contained generative music system, granting them independence as an element of the work while allowing complete focus on their visual practice.
The concept evolved to incorporate how a visual artist’s process was, or could be, influenced by musical feedback, and how a reciprocally responsive real-time generative system could affect the outcome of a visual piece, made by any user.
This work builds on the experience I gained with my previous project, Moving Towards Synchrony: another immersive work whose purpose is to explore the reciprocal relationship between electrical activity in the brain and external stimuli that have been generated by, and defined by, those same physiological events. I invite you all to see the presentation on that work from last year’s IRCAM Forum.
Moving Towards Synchrony investigates the neurological effects of modulating brain waves and their corresponding physiological effects through the use of a Brain-Computer Music Interface, which allows the data captured by an electroencephalogram to be sonified and translated into musical stimuli in real time.
The work explores the validity of using the scientific method as an artistic process. The methodology is to create an evidence-based system for the purpose of further developing research-based projects. It focuses on quantifying measurable and repeatable physiological and psychological changes in a user through a neurofeedback loop, requiring that user to concentrate on the stimuli to the exclusion of any other activity or action.
This system, in contrast, is concerned with staying in the process, and demands active mental and physical engagement from the user, who is influencing, and responding to, external stimuli that have been defined by the fundamental physical gestures already in use in the visual arts practice. What is being investigated is how the choices of a visual artist may be influenced by a generative music system based on their physical movements, and to what extent the artist will allow that. Conversely, the user may choose not to let the feedback influence their movements at all, or any combination of those possibilities, at any point during the work. The piece emphasizes the personal, intuitive rules and decisions used while making improvisational choices, bridging the gap between the purely scientific focus of the previous project and the art practice of the user emphasized here: instinctual and malleable versus quantifiable and physiological.
Method
The project employs wearable gestural technology, collecting and translating movement data through a non-invasive MUGIC sensor that tracks motion on the pitch, yaw, and roll axes. The melodic and rhythmic content is derived from, and constantly influenced by, the user’s movements; the musical performance and scales are directly shaped by hand orientation and movement.
The data are used to generate real-time, interactive musical compositions that the user experiences as they work, influencing their choices and ultimately the final visual works, while also presenting a live, immersive audio and visual piece.
The data are sonified through a patch built in Max 8, which manages the sound generation and DSP and, most importantly, is where the translation from movement to music is defined.
There are three main sections in this Max project:
1: The sensor data capture section.
2: The data conversion section.
3: The sound generation and DSP section.
The sensor data capture section receives movement data from the MUGIC sensor, which sends the information via the OSC protocol over a WiFi connection. That data is then split into three separate streams: yaw, pitch, and roll.
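The capture stage itself lives inside the Max patch; purely as an illustrative sketch of the same idea outside Max, the Python fragment below uses the python-osc library to receive the sensor’s OSC messages and split them into yaw, pitch, and roll streams. The OSC address, port, and argument order shown here are assumptions made for the example, not the MUGIC sensor’s documented format.

```python
# Minimal sketch of the sensor-capture stage (outside Max, for illustration only).
# The address "/mugic", port 4000, and the yaw/pitch/roll argument order are
# hypothetical placeholders for the sensor's actual OSC output.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_sensor(address, *args):
    # Split the incoming message into three separate orientation streams.
    yaw, pitch, roll = args[0], args[1], args[2]
    print(f"yaw={yaw:.1f}  pitch={pitch:.1f}  roll={roll:.1f}")

dispatcher = Dispatcher()
dispatcher.map("/mugic", on_sensor)            # route sensor messages to the handler

server = BlockingOSCUDPServer(("0.0.0.0", 4000), dispatcher)
server.serve_forever()                          # listen for sensor data over WiFi
```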
The data conversion section accepts the formatted sensor data and translates it into musical events. First, significant thresholds are defined and calibrated for each movement axis, chosen on the basis of average gestural ranges measured before the musical feedback is introduced. When those thresholds are reached or exceeded, an event is triggered. Depending on the mappings, those events can be one or more of several types of operations: the sounding of a note; a change of pitch, scale, or mode; note values and timings; and/or other generative performance characteristics. The time base for the musical events can be variable and based on hand movements, or set to a clock. Any of these mappings or threshold decisions can easily be changed to accommodate a different thesis or set of standards.
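As a minimal sketch of this conversion logic (again in Python rather than Max, and with placeholder thresholds, scales, and axis mappings rather than the calibrations used in the piece), the fragment below turns one orientation reading into note and scale-change events:

```python
# Sketch of the data-conversion stage: orientation values cross calibrated
# thresholds and trigger musical events. All values here are placeholders.
C_MINOR_PENTATONIC = [60, 63, 65, 67, 70]          # MIDI note numbers
D_DORIAN           = [62, 64, 65, 67, 69, 71, 72]

scales = [C_MINOR_PENTATONIC, D_DORIAN]
current_scale = 0

# Thresholds calibrated from the user's average gestural range (hypothetical values).
PITCH_NOTE_THRESHOLD = 30.0    # degrees: crossing it sounds a note
ROLL_SCALE_THRESHOLD = 60.0    # degrees: crossing it switches scale/mode

def convert(yaw, pitch, roll):
    """Translate one orientation reading into zero or more musical events."""
    global current_scale
    events = []
    if abs(roll) > ROLL_SCALE_THRESHOLD:
        current_scale = (current_scale + 1) % len(scales)
        events.append(("scale_change", current_scale))
    if pitch > PITCH_NOTE_THRESHOLD:
        scale = scales[current_scale]
        # Map yaw (-180..180 degrees) onto a degree of the current scale.
        degree = int((yaw + 180.0) / 360.0 * len(scale)) % len(scale)
        velocity = min(127, int(abs(pitch)))       # stronger tilt -> louder note
        events.append(("note", scale[degree], velocity))
    return events
```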
The third section is sound generation and DSP. It is responsible for the sonification of the data translated by the data conversion section. This section includes synthesis models, timbral characteristics, and spatial effects.
This project uses three synthesized voices created in Max 8 for the generative musical feedback. The timbral effects employed are waveform mixing, frequency modulation, and low-pass filtering. The spatial effects used include reverberation and delay. In addition to the initial settings of the voices, each of the timbral effects is modulated by separate event data captured by the wearable sensor.
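The voices themselves are built in Max 8; as a rough standalone analogue of one such voice, assuming illustrative parameter values rather than the patch’s actual settings, the sketch below mixes a sine and a sawtooth carrier, applies frequency modulation, and passes the result through a one-pole low-pass filter:

```python
# Standalone sketch of one synthesized voice: waveform mixing, frequency
# modulation, and a one-pole low-pass filter. Parameter values are illustrative.
import numpy as np

SR = 44100  # sample rate in Hz

def voice(freq, duration=1.0, mod_ratio=2.0, mod_index=3.0, mix=0.5, cutoff=1200.0):
    t = np.arange(int(SR * duration)) / SR

    # Frequency modulation: a modulator oscillator deviates the carrier phase.
    modulator = np.sin(2 * np.pi * freq * mod_ratio * t)
    phase = 2 * np.pi * freq * t + mod_index * modulator

    # Waveform mixing: blend a sine carrier with a sawtooth carrier.
    sine = np.sin(phase)
    saw = 2.0 * ((phase / (2 * np.pi)) % 1.0) - 1.0
    signal = (1.0 - mix) * sine + mix * saw

    # One-pole low-pass filter; the coefficient follows from the cutoff frequency.
    alpha = 1.0 - np.exp(-2 * np.pi * cutoff / SR)
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

# Example: render one note. In the piece, these parameters would instead be
# modulated continuously by the event data arriving from the wearable sensor.
note = voice(220.0, duration=0.5, mix=0.3)
```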
Contact Details
Johnny Tomasiello
johnnytomasiello@gmail.com