Abstract:
We demonstrate a camera-based gesture recognition system that enables individual and collaborative control of virtual drums, aimed at co-located or distributed collaborative music making. Our presentation highlights three features of this work-in-progress tool: i) a calibration mode, ii) individual performance (both static and mobile) and iii) collaborative group performance. Calibration targets flexibility, addressing differences in camera positioning, body characteristics and camera-player distances. Once the user has set up the virtual drums, the tracking system adapts to the user's body movements and to her distance from the camera. Visual referents are dynamically placed and scaled according to changes in body position, allowing the participants to move freely; playing is therefore compatible with dance-like motion. The individual-performance mode provides both visual referents and sonic feedback through the detection of upper-limb movements. Targeting collaborative usage, a client-server architecture was implemented to support synchronous interactions across multiple machines. The drum sounds are rendered locally, relying on exchanges of MIDI control data, so the virtual drum prototype can be played either locally or remotely with fairly low bandwidth usage. Our presentation includes audiovisual demonstrations of the prototype and explores its potential for ubiquitous music applications. We will discuss limitations and opportunities for deployment in everyday settings, highlighting adaptive strategies to handle temporal drift and data losses while keeping the interaction design compatible with casual usage. We will also provide examples of exploratory deployments involving participants with and without musical training.
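To make the low-bandwidth claim concrete, the sketch below shows one way a client could forward detected drum hits as 3-byte MIDI-style control messages to a relay server while rendering all audio locally, so only control data crosses the network. This is a minimal illustration under stated assumptions: the server address, port, wire format, and function names are hypothetical, as the abstract does not specify the prototype's actual protocol.

    import socket

    SERVER = ("ubimus.example.org", 9000)  # hypothetical relay server and port
    NOTE_ON = 0x99                         # MIDI note-on, channel 10 (percussion)

    def send_hit(sock, note, velocity):
        """Forward one detected drum hit as a 3-byte control packet."""
        sock.sendto(bytes([NOTE_ON, note & 0x7F, velocity & 0x7F]), SERVER)

    def receive_loop(sock, render):
        """Render hits relayed from other players; audio itself never crosses the network."""
        while True:
            packet, _addr = sock.recvfrom(3)
            if len(packet) == 3 and packet[0] == NOTE_ON:
                render(note=packet[1], velocity=packet[2])

    # Example: a snare hit (General MIDI note 38) costs only three bytes on the wire.
    # sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # send_hit(sock, note=38, velocity=100)

At drumming rates of a few hits per second per player, such messages amount to tens of bytes per second, which is consistent with the fairly low bandwidth usage reported above.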
Authors: Sutirtha Chakraborty, Azeema Yaseen, Joseph Timoney and Damián Keller
Bios:
Sutirtha Chakraborty is a PhD researcher who is highly passionate about science.
"I strive for excellence in my contributions. Since I started my undergraduate studies, my interests have always included Artificial Intelligence, Robotics, Audio Processing, and Music Technology. I am an active member of our 6-person lab team in Music Tech and AI. I have 5 peer-reviewed publications including one in a Springer Nature journal. I also have experience working with hardware and sensors.
Additionally, I am a part-time Computer Science, lecturer, tutor, and demonstrator at school, undergraduate and postgraduate levels. My thesis is to investigate problems of human-robot synchronization in a musical context where the robot will react with and engage in time and in phase to the rhythm of human musicians. To observe the beat pattern generated by the human musicians I am using a combination of signal analysis and video processing, bolstered by AI techniques to improve the accuracy.
I have created a leader-follower tempo analysis system, that has built upon what I have discovered about rhythmic sway and bodily pose and exploiting this to identify beat patterns from video data. Alongside my academic activities, I have been designing and developing software solutions at various levels adopting different programming languages, technologies and methodologies. I have created several interfaces for creative music-making and met the public at the Dublin Maker Fair in 2019. Prior to coming to Ireland, I have taken the roles of software engineer and technology leader in a start-up building home automation technologies.
"
Azeema Yaseen
Azeema Yaseen is a third-year Ph.D. candidate in the Department of Computer Science at Maynooth University, Ireland. She earned her bachelor's (2015) and master's (2017) degrees in Computer Science from Lahore College for Women University (LCWU), Pakistan. Between 2016 and 2019 she was a lecturer at the Department of Computer Science and Software Engineering at the University of Gujrat (UoG) and at the University of Management and Technology (UMT), Lahore. From her interest in the Internet of Things and big data during her master's to her current research, she has focused on applications of the Internet of Musical Things (IoMusT) and Human-Computer Interaction. In her Ph.D. research, she strives to create better interface and interaction frameworks for musical contexts where amateurs are the primary users. This research also applies to systems that employ multimodal interactions in the domain of digital wellbeing.
Joe Timoney
Dr. Joe Timoney joined the Department of Computer Science at Maynooth University in 1999. He teaches on undergraduate programs in Computer Science and in Music Technology. His research interests lie in Software Engineering and audio signal processing, with a focus on musical applications. He has supervised a number of Ph.D. students. In 2003 he spent a three-month research visit at the ATR laboratory in Kyoto, Japan, and in 2010 he visited the College of Computing at Zhejiang University, Hangzhou, China. He is also a keen DIY electronics enthusiast and has built a number of electronic instruments.
Damián Keller
Damián Keller is an associate professor of music technology at the Federal University of Acre and the Federal University of Paraíba in Brazil. He is a co-founder of the international research network Ubiquitous Music Group and a founding member of the Amazon Center for Music Research (NAP). He has published over two hundred articles on ubiquitous music and ecologically grounded creative practice in journals on information technology, design, education, philosophy, and the arts.
His latest co-edited book is Ubiquitous Music Ecologies (Routledge).