Presented by: Johnny Tomasiello
Biography
Abstract:
Deriving Synchrony: A Real-Time Interactive Brainwave-to-Music Translation Performance System is an immersive work that explores the reciprocal relationship between electrical activity in the brain and external stimuli generated and defined by those same physiological events. It does so through a Brain-Computer Music Interface (BCMI) that sonifies data captured by an electroencephalogram, translating it into musical stimuli in real time.
It is an interactive computer-assisted compositional performance system that translates electrical activity in the brain into musical compositions in real time, while allowing the user to exert conscious control over this translation process and the resulting generative sonic output within a musical framework. It can also teach participants how to effect a positive change in their own physiology by learning to influence the functions of the autonomic nervous system through neuro- and bidirectional feedback.
Introduction:
EEG brainwave data has shown high levels of success in classifying mental states [1], which affect “autonomic modulation of the cardiovascular system” [2]. Furthermore, existing studies have investigated how music can influence a response in the autonomic nervous system [3]. It is with these phenomena in mind that this work was first conceived and developed.
This project is primarily concerned with changes in alpha, since research has shown that stimulating activity within the alpha range promotes muscle relaxation, pain reduction, breathing-rate regulation, and decreased heart rate [4][5][6]. Alpha training has also been used to reduce stress, anxiety, and depression, and can encourage improvements in memory and mental performance and aid in the treatment of brain injuries.
My previous investigations into this subject matter emphasized exploration and quantification of the neurological effects of modulating brainwaves and their corresponding physiological processes through neuro- and bidirectional feedback as a form of therapy and alpha wave training, where the generative music serves as a real-time neurofeedback system that behaves in such a way as to encourage optimal brainwave responses, particularly in alpha.
I have taken my experience with that research, and the data I have collected, and propose this new iteration. The current system affords the user more active control over the resulting performance and musicality of the generative feedback, yielding a compositional and performance system whose behavior is defined more literally by the user’s intentions, while still functioning as a reciprocal neurofeedback system.
What is being explored here thus expands to include how the addition of the goal-directed behavior of a task-switching paradigm affects cognitive flexibility [7], and how much conscious simultaneous processing a subject requires in order to affect the resulting musical feedback. This added dimension considers the neural computations that determine how neurons make firing decisions, which in turn determine the brainwave activity being measured.
The procedure, when using this work for the exploration of the physiological effects of neuro- and bi-directional feedback, starts with obtaining and comparing two data sets: a control and a therapeutic one. The control set tracks EEG data without utilizing the musical feedback, while the therapeutic set records the data with the feedback.
The research methodology explores how to collect and quantify physiological data through non-invasive neuroimaging, effectively using the subject’s brainwaves to produce real-time interactive soundscapes and compositions that, as the subject simultaneously experiences them, can alter her/his physiological responses.
The melodic and rhythmic content are derived from, and constantly influenced by, the subject’s EEG readings. A subject focusing on the resulting feedback can attempt to elicit a change in their physiological systems, with the dual goals of directly affecting the musical performance, and achieving the optimum response in alpha.
The resulting physiological responses are recorded and measured to determine the efficacy of using external stimuli to affect the human body both physiologically and psychologically.
In addition to investigating these neuroscience concerns, this project is designed to explore the validity of using the scientific method as an artistic process. The methodology is to create an evidence-based system that can serve as the foundation for research-based projects.
As Gita Sarabhai expressed to John Cage “Music conditions one’s mind, leading to ‘moments in [one’s] life that are complete and fulfilled’.” [8]. Music, in this case, can also be used by the mind to condition one’s body.
Information on EEG:
An electroencephalogram (also known as an EEG) is an electrophysiological monitoring method used to record the electrical activity of the brain. A typical adult human EEG signal is between 10 and 100 µV (microvolts) in amplitude when measured from the scalp. The method was first described by German psychiatrist Hans Berger in 1929, and research into how brainwaves can be interpreted and modulated started shortly thereafter. Using an EEG, one can directly measure neural activity and capture cognitive processes in real time. Berger demonstrated that alpha waves (also known as Berger waves) are generated by cerebral cortical neurons.
In 1934, English physiologists Edgar Adrian and Bryan Matthews first described the sonification of alpha waves derived from EEG data. [9] They found that “non-visual activities which demand the entire attention (e.g. mental arithmetic) abolish the waves; sensory stimulation which demand attention also do so” [10], showing how concentration and thought processes affected activity in the alpha wave frequency range.
The brainwave activity recorded in an EEG is a summation of the inhibitory and excitatory postsynaptic potentials that occur across a neuronal membrane. [11]
The measurements are taken by way of electrodes placed on the scalp. The readings are divided into five frequency bands, delineating slow, moderate, and fast waves. The bands, from slowest to fastest are:
Delta, from approximately 1 Hz–4 Hz, signifying deepest meditation or dreamless sleep;
Theta, from approximately 4 Hz–8 Hz, signifying meditation or deep sleep;
Alpha, from approximately 7.5 Hz–13 Hz, representing quietly flowing thoughts;
Beta, from approximately 13 Hz–30 Hz, a normal waking state; and
Gamma, from approximately 30 Hz–44 Hz, most active during simultaneous processing of information that engages multiple areas of the brain.
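As a minimal sketch, the five bands above can be represented as a lookup table; the band edges are conventions (and overlap slightly at the theta/alpha boundary), so the exact values here are illustrative rather than definitive.

```python
# Approximate EEG frequency bands, as listed above. Edges are conventions
# and vary slightly between sources; note theta and alpha overlap at 7.5-8 Hz.
EEG_BANDS = {
    "delta": (1.0, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (7.5, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 44.0),
}

def classify_frequency(freq_hz: float) -> list[str]:
    """Return the name(s) of the band(s) containing a given frequency."""
    return [name for name, (lo, hi) in EEG_BANDS.items() if lo <= freq_hz < hi]
```

A 10 Hz reading, for example, falls squarely in alpha, while 7.8 Hz falls in the overlap between theta and alpha.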
History of EEG use in music:
Physicist Edmond Dewan began studying brainwaves in the early 1960s and developed a ‘brainwave control system’ that detected changes in alpha rhythms and used them to turn lighting on or off. “The light could also be replaced by ‘an audible device that made a beep when switched on’, allowing Dewan to spell out the phrase ‘I can talk’ in Morse code”. [9] Dewan subsequently met experimental composer Alvin Lucier, a meeting that inspired the first true brainwave composition.
Alvin Lucier first performed Music for Solo Performer in 1965. It involved the composer sitting in a chair on stage, with his eyes closed while his brainwaves were recorded. The data from the recording was amplified and distributed to speakers set up around the room. The speakers were placed against different types of percussion instruments, so the vibration of the speakers would cause the instrument to sound.
Lucier was able to control the percussion events through control of his cognitive functions, and found that a break in concentration would disrupt that control. Although mastery over the alpha rhythm was (and is) difficult, Music for Solo Performer greatly contributed to the field of experimental music and illustrated the depth of possibility in using EEG control over musical performance.
Computer scientist Jacques J. Vidal published the paper Toward Direct Brain-Computer Communication in 1973, the first proposal of the Brain-Computer Interface (BCI): a means of using the brain to control external devices.
This was the very beginning of Brain-Computer Music Interfacing (BCMI) research, which has evolved into an interdisciplinary field of study “at the crossroads of music, science and biomedical engineering” [12]. BCMIs (also referred to as Brain-Machine Interfaces, or BMIs) are still in use today, and the field of research around them is still in its early stages.
Paul Lehrer, Ph.D., with whom I studied at UMDNJ, has contributed significant research to the field of psychophysiology from the 1990s to today, with studies on biofeedback and stress-management techniques. Dr. Lehrer also set standards for music therapies and their use as relaxation techniques, demonstrating their beneficial physiological effects in trials with asthmatic subjects. One of his recent research papers, Heart Rate Variability Biofeedback: How and Why Does It Work? (2014) [14], investigated the effectiveness of heart rate variability biofeedback (HRVB) as a treatment for a variety of disorders, as well as its uses for performance enhancement.
Project Overview:
This project records EEG signals from a subject using four non-invasive dry extra-cranial electrodes on a commercially available Muse EEG headband. Measurements are recorded from the TP9, AF7, AF8, and TP10 electrodes, as specified by the International Standard EEG placement system, and the data is converted to absolute band powers based on the logarithm of the Power Spectral Density (PSD) of the EEG data for each channel. Heart rate data is obtained through PPG measurements. EEG measurements are recorded in bels to determine the PSD within each of the frequency ranges.
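The log-of-PSD band power described above can be sketched roughly in NumPy. This is an illustrative periodogram approach, not the headband's or Mind Monitor's exact algorithm (their windowing and averaging may differ), and the 256 Hz sample rate is an assumption.

```python
import numpy as np

FS = 256  # assumed Muse EEG sample rate in Hz

def band_power_bels(signal: np.ndarray, band: tuple[float, float], fs: int = FS) -> float:
    """Approximate absolute band power: log10 of the summed PSD within a band.
    A rough periodogram sketch; real pipelines typically window and average."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))  # periodogram PSD
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(np.log10(np.sum(psd[mask])))
```

Fed a pure 10 Hz sine, this yields far more power in the alpha range than in beta, as expected.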
The EEG readings are translated into music in real time. The time-base for the musical events can be variable (and/or based on brainwave data), or constrained by a regular clock. The choice of scales, modes, and chords, as well as rhythms and performance characteristics, needed to be carefully considered beforehand so that a finite set of parameters extracted from the EEG data could be parsed and used to produce a well-formed and dynamic piece of music.
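One way to constrain a continuous reading to a well-formed musical result, as described above, is to quantize a normalized band-power value onto a predefined scale. This is a hypothetical sketch (the project's actual Max patch may map values quite differently); the C-major scale and two-octave range are assumptions for illustration.

```python
# Quantize a normalized band-power value (0.0-1.0) to a MIDI note in C major.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees as semitone offsets

def value_to_midi_note(value: float, root: int = 60, octaves: int = 2) -> int:
    """Map a value in [0, 1] onto a two-octave C-major scale from middle C."""
    steps = len(C_MAJOR) * octaves
    idx = min(int(value * steps), steps - 1)
    octave, degree = divmod(idx, len(C_MAJOR))
    return root + 12 * octave + C_MAJOR[degree]
```

Quantizing in this way guarantees that any incoming value, however noisy, lands on a note within the chosen scale and register.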
There are three main sections of this system:
1. The EEG data capture section.
2. The EEG data conversion section.
3. The sound generation and DSP section.
The (1) EEG data capture section receives EEG data from the Muse headset, which is converted to OSC data and transmitted over WiFi via the iOS app Mind Monitor. That data is then split into the five separate brainwave frequency bandwidths: delta, theta, alpha, beta and gamma. Additional data is also captured, including accelerometer, gyroscope, blink, and jaw clench, in order to control for any artifacts in the data capture. Sensor connection data is used to visualize the integrity of the sensor’s attachment to the subject. PPG data is also captured for use in a future iteration of the project.
The (2) EEG data conversion section accepts the EEG bandwidth data representing specific event-related potentials, which are then translated into musical events.
This section comprises three subsections that format their data output differently, depending on the use case:
1. Internal sound generation and DSP, for use entirely within the Max environment, where the captured data is converted into musical events and sonified using synthesis and effects built directly in Max.
2. External MIDI, for use with MIDI-equipped hardware or software.
3. External frequency and gate, for use with modular synthesizer hardware.
Each of these can be used separately or simultaneously, depending on the needs of the piece.
First, upper and lower limits of the brainwave readings are taken for calibration purposes, and significant thresholds for each brainwave frequency bandwidth are defined. These are chosen based on average and optimal EEG measurements taken prior to the generation of the musical feedback. When a threshold is reached or exceeded, an event is triggered. Depending on the mappings, those events can trigger one or more operations: the sounding of a note; a change in pitch, scale, or mode; changes to note values and timings; and/or other generative performance characteristics, such as a change in timbre.
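The calibration and trigger logic described above might be sketched as follows. The 0.8 fraction is a hypothetical tuning value, not a figure from the project; in practice the threshold would be set from the averaged and optimal readings gathered before the feedback begins.

```python
def calibrate_threshold(baseline: list[float], fraction: float = 0.8) -> float:
    """Place a band's trigger threshold at a fraction of the span between the
    lower and upper limits observed during calibration (fraction is assumed)."""
    lo, hi = min(baseline), max(baseline)
    return lo + fraction * (hi - lo)

def crossed(value: float, threshold: float) -> bool:
    """An event fires when a reading reaches or exceeds its threshold."""
    return value >= threshold
```

With a baseline spanning 0.0 to 1.0, the default threshold lands at 0.8, so a reading of 0.9 fires an event while 0.5 does not.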
For the data conversion, the event-related potentials are mapped in the following way:
Changes in alpha, relative to the predefined threshold, govern pitch.
Changes in theta, relative to the threshold, influence note timing and rhythm, and the triggering of notes/note density (in relation to the values of beta).
Changes in beta, relative to the threshold, influence scale and transposition.
Changes in delta, relative to the threshold, influence spatial qualities like reverberation and delay.
Changes in gamma, relative to the threshold, influence timbre.
Any of these mappings or threshold decisions can be easily changed to accommodate a different thesis, or set of standards.
The third section is (3) Sound generation and DSP. It is responsible for the sonification of the data translated from the EEG data conversion section. This section includes synthesis models, timbral characteristics, and spatial effects.
The Internal Sound Generation and DSP version of this project uses three synthesized voices created in Max for the generative musical feedback. There are two subtractive voices that each use a mix of sine, sawtooth, and triangle waves, and one FM voice.
The timbral effects employed are waveform mixing, frequency modulation, and high-pass, band-pass, and low-pass filters. The spatial effects used include reverberation and delay. In addition to the initial settings of the voices, each of the timbral and spatial effects is modulated by separate event-related potential data captured by the EEG.
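As an illustration of the FM voice, a minimal two-operator render is sketched below. This is a generic FM formulation, not the project's Max patch; the carrier/modulator ratio and modulation index are assumed parameters, with the index being the natural target for the gamma-to-timbre mapping described earlier.

```python
import numpy as np

def fm_voice(duration_s: float, carrier_hz: float, ratio: float,
             index: float, fs: int = 44100) -> np.ndarray:
    """Render a simple two-operator FM voice. A larger modulation index
    brightens the timbre, so it could be driven by gamma band power."""
    t = np.arange(int(duration_s * fs)) / fs
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    # Phase-modulated carrier: the classic Chowning FM formulation.
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)
```

An index of 0 reduces this to a pure sine; raising it adds sidebands spaced at multiples of the modulator frequency.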
Conclusions:
This project is a contemporary interpretation of an idea I've been interested in for many years, starting with investigation into bidirectional EKG biofeedback.
My initial experience with the subject came during my degree studies in psychophysiology at Rutgers University (underwritten by The University of Medicine and Dentistry of New Jersey). While at UMDNJ, I had the privilege of working directly with some of the doctors who were at the forefront of psychophysiological research, and whose work was rooted in reducing stress in asthmatic subjects for the purposes of lessening the frequency of attacks. [13]
At the time, the technology required to explore this idea was of considerable size, and prohibitively expensive, for all but medical or formally funded academic purposes. With the current availability of low-cost electroencephalography (EEG) devices and heart rate monitors, the possibility of autonomous exploration of these concepts has become a reality.
Although this project is primarily concerned with changes in the alpha EEG brainwave frequency range, changes in other frequency ranges are used to trigger events in the feedback. This approach was adopted to ensure that a subject’s loss of focus (and/or a drop in the Power Spectral Density of alpha) would not negatively affect the generation of novel musical feedback. With the help of consistent feedback, the subject would be able to regain their focus and continue. Depending on the subject’s state of relaxation (and the PSD of the other four EEG frequency ranges measured), the performance and phrasing of the musical feedback will change in such a way as to encourage greater focus.
For the initial proof of concept trials, I tested a small sampling of subjects. Preliminary data shows that alpha readings were higher, on average, during the therapeutic phase. Also, a higher overall peak value was achieved during the therapeutic phase. This suggests that this feedback model is an effective way of increasing activity in the alpha brainwave frequency range, which is the beneficial physiological and psychological effect I was hoping to find, although much more data needs to be collected before any definitive conclusions can be drawn.
The modular design of the work allows almost any variable to be included or excluded, which will be necessary as the research moves forward, in order to more thoroughly test the foundational elements of the thesis, as well as to pursue any musicological exploration and analysis that defining the feedback raises.
In the meantime, in addition to research and data collection, I am using the software as a compositional system to create recorded works and live soundtracks. I am also planning to mount the project as an interactive installation in a live setting.
Contact Details:
Johnny Tomasiello
https://johnnytomasiello.com/
Credits & Acknowledgments:
IRCAM
Cycling ’74
Dr. Paul M. Lehrer and Dr. Richard Carr
InteraXon Muse electroencephalography headband
James Clutterbuck (Mind Monitor developer)
Carol Parkinson, Executive Director of Harvestworks
Melody Loveless, NYU & Max certified trainer
References:
[1] J. J. Bird, A. Ekart, C. D. Buckingham, D. R. Faria. “Mental Emotional Sentiment Classification with an EEG-based Brain-Machine Interface”, International Conference on Digital Image & Signal Processing (DISP’19), Oxford, UK (2019)
[2] K. Madden and G.K. Savard. “Effects of Mental State on Heart Rate and Blood Pressure Variability in Men and Women” in Clinical Physiology 15, 557–569 (1995)
https://pubmed.ncbi.nlm.nih.gov/8590551/
[3] F. Riganello et al. “How Can Music Influence the Autonomic Nervous System Response in Patients with Severe Disorder of Consciousness?” in Frontiers in Neuroscience vol. 9, 461 (2015)
[4] H. Marzbani et al. “Methodological Note: Neurofeedback: A Comprehensive Review on System Design, Methodology and Clinical Applications” in Basic and Clinical Neuroscience Journal vol. 7, 143–158 (2016)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4892319/
[5] P.M. Lehrer and R. Carr. “Stress Management Techniques: Are They All Equivalent, or Do They Have Specific Effects?” in Biofeedback and Self-Regulation (1994)
https://pubmed.ncbi.nlm.nih.gov/7880911/
[6] J. Ehrhart, M. Toussaint, C. Simon, C. Gronfier, R. Luthringer, G. Brandenberger. “Alpha Activity and Cardiac Correlates: Three Types of Relationships During Nocturnal Sleep” in Clinical Neurophysiology vol. 111, 940–946 (2000)
https://pubmed.ncbi.nlm.nih.gov/10802467/
[7] A.L. Proskovec, A.I. Wiesman, and T.W. Wilson. “The Strength of Alpha and Gamma Oscillations Predicts Behavioral Switch Costs” in NeuroImage vol. 188, 274–281 (2019)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6401274/
[8] J. Cage, R. Kostelanetz. John Cage Writer: Previously Uncollected Pieces.
New York: Limelight (1993)
[9] B. Lutters, P. J. Koehler. “Brainwaves in Concert: the 20th Century Sonification of the Electroencephalogram” in Brain 139 (Pt 10), 2809–2814 (2016)
https://academic.oup.com/brain/article/139/10/2809/2196694#
[10] E.D. Adrian and B.H.C. Matthews. “The Berger Rhythm: Potential Changes from the Occipital Lobes in Man” in Brain 57, Issue 4 (December 1934)
https://academic.oup.com/brain/article/133/1/3/314887
[11] M Atkinson, MD, “How To Interpret an EEG and its Report” (2010)
https://neurology.med.wayne.edu/pdfs/how_to_interpret_and_eeg_and_its_report.pdf
[12] E.R. Miranda. “Brain–Computer Music Interfacing: Interdisciplinary Research at the Crossroads of Music, Science and Biomedical Engineering” in E.R. Miranda, J. Castet, ed. Guide to Brain-Computer Music Interfacing. London: Springer-Verlag, 1–27 (2014)
[13] P.M. Lehrer et al. “Relaxation and Music Therapies for Asthma among Patients Prestabilized on Asthma Medication” in Journal of Behavioral Medicine 17, 1–24 (1994)
https://pubmed.ncbi.nlm.nih.gov/8201609/
[14] P.M. Lehrer, R. Gevirtz. “Heart Rate Variability Biofeedback: How and Why Does It Work?” in Frontiers in Psychology vol. 5, 756 (2014)