Communication Acoustics Lab

The COMMA Lab is part of the Centre for Digital Music (C4DM) at Queen Mary University of London, conducting research into how people perceive sound and into technologies for improving communication.

We work across disciplines of music, engineering, psychology, and cognitive science to understand and model audio communication between humans and between humans and machines.

Our research themes include:

  • Audio/timbre psychoacoustics and semantics
  • Timbre tools for digital lutherie and interaction
  • Cross-sensory perception involving sound/timbre
  • Neural audio/timbre synthesis and processing
  • Social music data science

Founded in 2021, COMMA is led by Charalampos Saitis. Members and collaborators come from across C4DM and the Schools of EECS, SEMS, and SMS, including PhD students on the Artificial Intelligence & Music (UKRI AIM), Data-informed Audience-centric Media Engineering (QMUL-BBC DAME), and China Scholarship Council (QMUL-CSC) programmes.

The lab has received funding from the British Academy, SSHRC-ACTOR, and QMUL’s Centre for Public Engagement.


Dec 2022 Charis is elected a member of the Institute of Acoustics and Vice-Chair of the Technical Committee on Musical Acoustics of the European Acoustics Association.
Nov 2022 Remi presents his project on embodiment for intelligent musical systems at RITMO’s workshop on Embodied Perspectives on Musical AI (EmAI).
Oct 2022 New preprints on neural audio synthesis: Sinusoidal Frequency Estimation by Gradient Descent and Differentiable Modal Resonators. Submitted to ICASSP 2023.
Oct 2022 New paper on timbre semantics. We present a gamified interactive system for crowdsourcing a timbre semantic vocabulary.
Sep 2022 Welcome to new COMMA members Chengye Wu (PhD funded by QMUL-CSC) and Jordie Shier (PhD funded by UKRI AIM)!
Sep 2022 New preprint on music and morality. Can lyrics from favourite artists predict our moral values? Accepted at ISMIR 2022.
Aug 2022 HAID 2022 included two days of exciting workshops, talks, and demos. Check out the proceedings and live streams of Day 1 and Day 2.
Jul 2022 New paper on neural audio synthesis. We combine VAEs with GANs for many-to-many transfer of vocal and instrumental timbre.
Jul 2022 Charis presents the Seeing Music project at the 20th International Multisensory Research Forum (IMRF 2022).
May 2022 New paper on timbre semantics. We introduce a novel methodology to study semantic associations of disembodied electronic sounds.
Dec 2021 C4DM’s Special Interest Group on Neural Audio Synthesis (SIGNAS) hosts the first ever Neural Audio Synthesis Hackathon (NASH).
Nov 2021 Congratulations to Ben for receiving Best Reviewer Award at ISMIR 2021!
