The COMMA Lab, headed by Dr Charalampos Saitis, is part of the Centre for Digital Music (C4DM) at Queen Mary University of London. We conduct research into how people perceive sound and develop technologies for improving communication.

We work across the disciplines of music, engineering, psychology, and cognitive science to understand and model audio communication between humans, and between humans and machines.


Latest News

Feb 2024 New paper on gender, music, toy commercials, and eXplainable AI accepted at the IEEE ICASSP 2024 Workshop on Explainable Machine Learning for Speech and Audio.
Jan 2024 Our review of DDSP for music and speech audio synthesis has been published in Frontiers in Signal Processing. Check out this web book containing practical advice on differentiable synthesiser programming.
Dec 2023 Charis talks to the Inside Europe podcast of Deutsche Welle about music and morals, how they might link through timbre and pitch, and the research opportunities but also ethical dimensions of recent work from the lab.
Dec 2023 Taking a short break from his internship at Sony CSL Paris, Ben has been invited to talk about DDSP and optimisation pathologies at the Machine Learning for Audio Workshop at NeurIPS 2023.
Nov 2023 New paper on music preferences and moral values published in PLOS ONE.
Nov 2023 New paper on the gendering of music in toy commercials accepted at ISMIR 2023!
Oct 2023 New preprints on controllable symbolic music generation with diffusion: Fast Diffusion GAN Model for Symbolic Music Generation Controlled by Emotions and Composer Style-specific Symbolic Music Generation Using Vector Quantized Discrete Diffusion Models.
Oct 2023 Bleiz presents his work Towards a Co-Operative AI Digital Pain Companion at CSCW 2023.
Sep 2023 Welcome to new COMMA member Haokun Tian! Haokun’s PhD research will focus on timbre representations for neural audio synthesis and creative AI tools for music making, in collaboration with Bela and funded by the UKRI CDT in AI & Music.
Sep 2023 Charis received the School of EECS Best Citizen Award for 2023!
Feb 2023 Two new papers on neural audio synthesis accepted at ICASSP 2023: Sinusoidal Frequency Estimation by Gradient Descent and Rigid-Body Sound Synthesis with Differentiable Modal Resonators.
Dec 2022 Charis has been elected a member of the Institute of Acoustics and Vice-Chair of the Technical Committee on Musical Acoustics of the European Acoustics Association.
Nov 2022 Remi presents his project on embodiment for intelligent musical systems at RITMO’s workshop on Embodied Perspectives on Musical AI (EmAI).
Oct 2022 New preprints on neural audio synthesis: Sinusoidal Frequency Estimation by Gradient Descent and Differentiable Modal Resonators. Submitted to ICASSP 2023.
Oct 2022 New paper on timbre semantics. We present timbre.fun, a gamified interactive system for crowdsourcing a timbre semantic vocabulary.
Sep 2022 Welcome to new COMMA member Jordie Shier! Jordie’s PhD is a collaboration between COMMA, the Augmented Instruments Lab, and Ableton, funded through the UKRI CDT in AI & Music.
Sep 2022 New preprint on music and morality. Can lyrics from favourite artists predict our moral values? Accepted at ISMIR 2022.
Aug 2022 HAID 2022 included two days of exciting workshops, talks, and demos. Check out the proceedings and live streams of Day 1 and Day 2.
Jul 2022 New paper on neural audio synthesis. We combine VAEs with GANs for many-to-many transfer of vocal and instrumental timbre.
Jul 2022 Charis presents the Seeing Music project at the 20th International Multisensory Research Forum (IMRF 2022).
May 2022 New paper on timbre semantics. We introduce a novel methodology to study semantic associations of disembodied electronic sounds.
Dec 2021 C4DM’s Special Interest Group on Neural Audio Synthesis (SIGNAS) hosts the first-ever Neural Audio Synthesis Hackathon (NASH).
Nov 2021 Congratulations to Ben for receiving Best Reviewer Award at ISMIR 2021!