AI and Brain-Computer Interface for Sound Design Generation and Musical Instrument Control through Emotion and Focus Recognition - Tommaso Colafiglio, Fabrizio Festa, Tommaso Di Noia

Our project is devoted to processing electroencephalogram (EEG) signals to control sound generation. In addition, our software can control the parameters of any virtual musical instrument. We use the Muse EEG headset, a non-invasive brain-computer interface (BCI), to acquire and recognize the EEG signals.
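For illustration, the minimal sketch below shows one common way to acquire Muse EEG samples, assuming the headset is already being streamed over Lab Streaming Layer (LSL), for example with the open-source muselsl tool, and that the pylsl package is installed; this is an assumed acquisition path, not necessarily the exact one used in our system.

```python
# Minimal sketch: read Muse EEG samples over Lab Streaming Layer (LSL).
# Assumes a stream of type "EEG" is already running (e.g. via `muselsl stream`).
from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop("type", "EEG", timeout=10)
if not streams:
    raise RuntimeError("No EEG stream found; is the Muse streaming over LSL?")

inlet = StreamInlet(streams[0])
while True:
    sample, timestamp = inlet.pull_sample(timeout=1.0)  # one multichannel sample
    if sample is not None:
        # Muse headsets typically expose TP9, AF7, AF8, TP10 (plus an AUX channel).
        print(timestamp, sample)
```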

Presented by: Fabrizio Festa, Tommaso Colafiglio and Tommaso Di Noia

-

In short, this is the structure of our AI-based software:
1) A deep learning model architecture generates sound textures.
2) An emotion recognition system conditions the deep learning model.
3) A dedicated machine learning pipeline recognizes human emotions.

As a result, we can directly control some parameters of virtual instruments both consciously and unconsciously. This is possible because we have developed a machine learning model that recognizes the user's emotions; the recognized emotion then interacts with the sound synthesis of any virtual instrument.
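As an illustrative sketch only, the Python below shows one way such a control loop could be wired: band-power features computed from an EEG window feed an sklearn-style emotion classifier, and the predicted probabilities are mapped onto a few synthesis parameters. The label set, frequency bands, and parameter mapping are assumptions for illustration, not our exact pipeline.

```python
# Illustrative control loop: EEG window -> band-power features -> emotion
# probabilities -> normalized virtual-instrument parameters in [0, 1].
import numpy as np

EMOTIONS = ["calm", "happy", "sad", "tense"]  # hypothetical label set

def band_power_features(eeg_window: np.ndarray, fs: int = 256) -> np.ndarray:
    """Mean spectral power in the theta, alpha, beta, and gamma bands."""
    freqs = np.fft.rfftfreq(eeg_window.shape[0], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg_window, axis=0)) ** 2
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])

def instrument_parameters(model, eeg_window: np.ndarray) -> dict:
    """Map predicted emotion probabilities onto three synth parameters."""
    probs = model.predict_proba(band_power_features(eeg_window).reshape(1, -1))[0]
    return {
        "filter_cutoff": float(probs[EMOTIONS.index("tense")]),
        "reverb_mix": float(probs[EMOTIONS.index("calm")]),
        "lfo_rate": float(probs[EMOTIONS.index("happy")]),
    }
```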

-

We will present two systems for generating sound textures and controlling sound, using two brain-computer interfaces and several machine learning and deep learning models.

Specifically, the workshop and the demonstration will focus on a neural musical instrument and on a timbre generation system conditioned by the user's emotions.

Neural Musical Instrument: Through the BCI Muse EEG headset, we extract features from the electroencephalographic signal that allow us to detect the user's state of brain activation in real time. To accomplish this, we trained an ML model that classifies the user's state of conscious concentration. We then apply a dedicated analysis protocol to the EEG signal to predict a continuous activation value for the user's mental state, and use this value to control three parameters of a virtual instrument, enabling conscious modulation of the neural musical instrument's sound.
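A minimal sketch of the last step follows, assuming the continuous activation value is sent to the virtual instrument as MIDI control-change messages via the mido library; the CC numbers and the one-to-three mapping are hypothetical, not the actual parameters of our instrument.

```python
# Minimal sketch: scale a continuous activation value in [0, 1] onto three
# MIDI CC controls of a virtual instrument (CC numbers are hypothetical).
import mido

CC_NUMBERS = (74, 71, 91)  # e.g. filter cutoff, resonance, reverb send

def send_activation(activation: float, port) -> None:
    """Send the activation value to three controls on the default MIDI channel."""
    value = max(0, min(127, int(round(activation * 127))))
    for cc in CC_NUMBERS:
        port.send(mido.Message("control_change", control=cc, value=value))

# Usage: port = mido.open_output()  # system default MIDI output
#        send_activation(predicted_activation, port)
```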

Emotional Sound Texture Generation: Using a dataset collected in the SisInfLab laboratory at the Polytechnic University of Bari, we trained an emotion recognition model for the BCI Muse EEG headset. This model detects the user's predominant emotion in real time. The resulting classification value is then used to condition the generation of timbres by deep learning models that were pre-trained on datasets of original samples produced by the research team.
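For illustration only, the PyTorch sketch below shows one generic way a timbre generator can be conditioned on a recognized emotion class, by embedding the class label and concatenating it with a latent vector before decoding; the architecture and dimensions are assumptions and do not describe our pre-trained models.

```python
# Illustrative emotion-conditioned texture decoder (assumed architecture).
import torch
import torch.nn as nn

class ConditionedTextureDecoder(nn.Module):
    def __init__(self, n_emotions: int = 4, latent_dim: int = 64, n_samples: int = 16000):
        super().__init__()
        self.emotion_embedding = nn.Embedding(n_emotions, 16)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 16, 512),
            nn.ReLU(),
            nn.Linear(512, n_samples),
            nn.Tanh(),  # waveform-like output in [-1, 1]
        )

    def forward(self, z: torch.Tensor, emotion_id: torch.Tensor) -> torch.Tensor:
        cond = self.emotion_embedding(emotion_id)      # (batch, 16)
        return self.net(torch.cat([z, cond], dim=-1))  # (batch, n_samples)

# Usage: one texture conditioned on the recognized emotion class (e.g. class 2).
decoder = ConditionedTextureDecoder()
texture = decoder(torch.randn(1, 64), torch.tensor([2]))
```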
