➡️ This presentation is part of IRCAM Forum Workshops Paris / Enghien-les-Bains March 2026

bellplay~ Workshop
bellplay~ is a scripting environment for audio and music composition. Unlike real-time environments such as Max or SuperCollider, it renders sound offline, which opens up techniques such as multi-pass and look-ahead processing, computationally expensive operations, non-causal behavior, and analysis-driven transformations — without the constraints of CPU limits or voice management. It also bridges symbolic, notation-based operations with audio, making it suitable for acoustic, electronic, and mixed music within a single environment.
For more information, visit: https://bellplay.net.
Before
You will receive setup instructions in advance. Please install and configure bellplay~ on your laptop (Mac or Windows) before the session so we can use the time efficiently.
During
The workshop builds progressively toward writing your own granular-processing script, using it as a vehicle to introduce the environment and its core features. The session runs 60 minutes.
Foundations: launching the environment, writing and running a script, generating a simple sound and placing it on a timeline, and rendering audio to disk.
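bellplay~ scripts are written in its own bell language, which the session covers directly. Purely as a language-neutral illustration of the offline pattern described here (generate a sound, place it on a timeline, render to disk), the following Python sketch does the same with only the standard library; every function name is my own, not bellplay~ API.

```python
import math
import struct
import wave

SR = 44100  # sample rate in Hz

def sine(freq, dur, gain=0.5):
    """Generate a sine tone as a list of float samples in [-1, 1]."""
    n = int(dur * SR)
    return [gain * math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def place(timeline, sound, onset):
    """Mix `sound` into `timeline` starting at `onset` seconds, growing it as needed."""
    start = int(onset * SR)
    end = start + len(sound)
    if end > len(timeline):
        timeline.extend([0.0] * (end - len(timeline)))
    for i, s in enumerate(sound):
        timeline[start + i] += s
    return timeline

def render(timeline, path):
    """Write the mono timeline to a 16-bit WAV file on disk."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in timeline
        )
        f.writeframes(frames)

timeline = []
place(timeline, sine(440.0, 0.5), onset=0.0)  # A4 at the start
place(timeline, sine(660.0, 0.5), onset=1.0)  # E5 one second later
render(timeline, "foundations.wav")
```

Because rendering is offline, the timeline is just data: it can be grown, revisited, and rewritten before anything is committed to disk.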
Core project — granulation: importing a sample into a buffer; generating grains by defining onset position, duration, gain, and stereo placement through scripted rules; looping and layering grains to build dense textures; and rendering, adjusting parameters, and re-rendering iteratively.
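The core idea of "grains by scripted rules" can be sketched generically. This hedged Python fragment generates a list of grain specifications with the parameters named above (onset, position, duration, gain, stereo placement); the rules and names are illustrative assumptions, not bellplay~ syntax.

```python
import random

random.seed(0)  # fixed seed so the grain stream is reproducible across renders

def make_grains(buffer_dur, total_dur, density=20.0):
    """Generate grain specs by scripted rules: onset on the output timeline,
    read position in the source buffer, duration, gain, and stereo pan."""
    grains = []
    t = 0.0
    while t < total_dur:
        grains.append({
            "onset": t,                                 # placement on the timeline (s)
            "position": random.uniform(0, buffer_dur),  # read point in the source (s)
            "duration": random.uniform(0.02, 0.1),      # 20-100 ms grains
            "gain": random.uniform(0.2, 0.8),
            "pan": random.uniform(0.0, 1.0),            # 0 = left, 1 = right
        })
        t += random.expovariate(density)                # ~`density` grains per second
    return grains

grains = make_grains(buffer_dur=4.0, total_dur=10.0)
```

Layering dense textures then amounts to calling such a generator several times with different densities or duration ranges and concatenating the results before rendering; adjusting a parameter and re-rendering is a one-line change.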
Optional topics (time permitting): running basic analysis — such as onset detection or spectral centroid — and using the results to shape grain behavior; exporting rendered audio or buffer data for use outside the environment.
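The analysis-driven idea can also be sketched in plain Python. Since spectral centroid and onset detection need more machinery, this illustration substitutes a simpler analysis, windowed RMS energy, and maps it to grain duration (louder passages get shorter grains); all names are hypothetical, not bellplay~ API.

```python
import math

SR = 44100

def rms_per_window(samples, win=1024):
    """Windowed RMS energy: a deliberately simple stand-in for the richer
    analyses (onset detection, spectral centroid) an actual script might use."""
    out = []
    for i in range(0, len(samples) - win + 1, win):
        frame = samples[i:i + win]
        out.append(math.sqrt(sum(s * s for s in frame) / win))
    return out

def duration_from_energy(energy, lo=0.02, hi=0.1):
    """Map analysis values to grain durations: louder windows -> shorter grains."""
    peak = max(energy) or 1.0
    return [hi - (e / peak) * (hi - lo) for e in energy]

# toy source: one second of a fading sine, so energy decreases over time
src = [math.sin(2 * math.pi * 220 * i / SR) * (1 - i / SR) for i in range(SR)]
durations = duration_from_energy(rms_per_window(src))
```

Because everything happens offline, the whole source can be analyzed first and the results used to shape grains anywhere on the timeline, including look-ahead decisions a real-time system could not make.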
After
By the end of the session you will have a working granulation script and a clear understanding of how bellplay~ handles audio algorithmically. You will also receive example scripts and documentation to continue working independently after the workshop.
This workshop is aimed at composers and sound artists interested in direct, fine-grained control over sound through code, without the constraints of real-time interaction or visual programming.