This release contains a number of small but important fixes related to real-time corpus recording, as well as a new tutorial for app users.

  • Recording Latency Correction: The somax.audiorecord object now automatically adjusts the recorded slices to compensate for the latency of the associated audioinfluencer, yielding better segmentation. This behavior can be controlled in the corpus recording settings.

  • Audiorecord Sample Rate Mismatch: The somax.audiorecord object now warns explicitly when the user tries to record into an existing corpus whose underlying audio file has a different sample rate than Max's. This release also fixes several bugs related to underlying buffer sample rates. See the "sample rate mismatch" tab of the somax.audiorecord maxhelp.

  • Real-time Corpus Reloading: When using multiple record-enabled players, it's now possible to load a corpus into any one of the players without causing audio glitches or interruptions in the other players while loading.

  • "Script your Environment" Tutorial: A new tutorial has been added on preparing your Somax2 environment and controlling the parameters of any .app-object using scripting messages.

  • Various Bug Fixes: This release also includes a number of smaller bug fixes, clarifications, and documentation updates.

Go to the Somax2 Forum page for installation

See more at the Somax2 Project Page

Somax2 is an application for musical improvisation and composition using AI, featuring machine listening, a cognitive memory activation model, a multi-agent architecture, a full application interface for agent patching and control, and a full Max library API. Somax2 is implemented in Max and Python and is based on a generative AI model that produces real-time machine improvisations coherent both with the selected corpus styles and with the unfolding external musical context. Somax2 handles both MIDI and audio input, corpus memory, and output. The model can be used with little configuration, letting its agents interact autonomously with musicians (and with one another), but it also offers a variety of manual controls over its generative process and interaction strategies, effectively letting one use it as a fully flexible smart instrument.