➡️ This presentation is part of IRCAM Forum Workshops Paris / Enghien-les-Bains March 2026
The present work introduces SMLP – Sound Multi-Layer Perceptron, a system that employs an artificial neural network as both a musical instrument and a cognitive–computational exploration environment. The architecture consists of: (i) a Problem Generator that constructs synthetic, parameterisable, and controllable datasets; and (ii) a Learning Engine, a Multi-Layer Perceptron (MLP) implemented from scratch with variable depth and width, operating under a real-time supervised learning paradigm.

The system integrates a 14-channel Emotiv Brain–Computer Interface (BCI) to acquire the user's raw EEG signal. The signals are preprocessed and analysed by our pretrained machine learning models, which estimate several mental states in real time: focus, stress, engagement, arousal, valence, and frontal asymmetry. These neurophysiological indices are not used merely for visualisation or as external control parameters; they directly influence the neural network's learning dynamics. Specifically, the mental states modulate the backpropagation process, acting as an adaptive optimiser that interacts with weight updates. EEG-derived metrics directly influence the regularisation coefficients, configuring a human-in-the-loop learning paradigm in which the user's cognitive condition becomes an integral component of the optimisation function. Training thus assumes a neuroadaptive dimension: the network learns as a function of the mental state detected in real time.

A multimodal GUI renders the model's internal state both visually and audibly during training. Weights and biases are mapped onto the parameters of a multi-oscillator additive sound synthesis system, transforming optimisation dynamics into acoustic material. The system further incorporates a real-time feedback loop among the interface, the neural network, and the user's neurophysiological state, generating a closed ecosystem in which machine-learning processes and brain activity co-evolve.
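The idea of EEG metrics modulating the regularisation coefficients of backpropagation can be sketched as follows. This is a minimal illustrative example, not the SMLP implementation: the mapping of a normalised "focus" index onto an L2 coefficient, and all names and constants, are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer model: pred = W x + b, squared-error loss with L2 penalty.
W = rng.normal(size=(4, 3))
b = np.zeros(4)

def train_step(W, b, x, y, focus, lr=0.01, base_l2=0.1):
    """One gradient step whose L2 coefficient is scaled by a normalised
    EEG 'focus' index in [0, 1] (hypothetical neuroadaptive mapping)."""
    l2 = base_l2 * (1.0 - focus)          # higher focus -> weaker regularisation
    err = W @ x + b - y                   # prediction error
    grad_W = np.outer(err, x) + l2 * W    # gradient of loss + L2 penalty
    grad_b = err
    return W - lr * grad_W, b - lr * grad_b, 0.5 * err @ err

x = rng.normal(size=3)
y = rng.normal(size=4)
losses = []
for _ in range(100):
    W, b, loss = train_step(W, b, x, y, focus=0.8)
    losses.append(loss)
```

In a full human-in-the-loop setup, `focus` would be refreshed each step from the live BCI stream, so the effective optimisation objective drifts with the performer's mental state.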
SMLP therefore proposes an operational framework that integrates technical analysis, auditory perception, and neurocognitive regulation. It offers a tool for research, through auditory and physiological monitoring of the training steps, as well as for artistic practice, introducing a form of adaptive neural composition guided by the performer's mental state. The system architecture illustrates the platform's overall framework, delineating its constituent components and their interactions across distinct processing stages.
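The mapping from network parameters to additive synthesis described above can be sketched in a few lines. This is a hypothetical mapping for illustration only: treating each weight as one partial, with magnitude driving amplitude and value driving a small detune around harmonics of a base frequency, is an assumption, not the SMLP mapping.

```python
import numpy as np

def sonify_weights(W, sr=44100, dur=0.5, f0=110.0):
    """Render a weight matrix as a short additive-synthesis buffer.
    Each weight becomes one partial: |w| -> amplitude, tanh(w) -> detune
    around the k-th harmonic of f0 (illustrative mapping)."""
    t = np.arange(int(sr * dur)) / sr
    sig = np.zeros_like(t)
    for k, w in enumerate(W.flatten(), start=1):
        freq = f0 * k * (1.0 + 0.01 * np.tanh(w))  # weight-driven detune
        sig += np.abs(w) * np.sin(2 * np.pi * freq * t)
    return sig / (np.max(np.abs(sig)) + 1e-9)      # normalise to [-1, 1]

rng = np.random.default_rng(1)
audio = sonify_weights(rng.normal(size=(4, 3)))
```

Called once per training step on the current weights, such a mapping turns the trajectory of the optimiser into an evolving spectrum, which is the sense in which optimisation dynamics become acoustic material.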