Audio Laboratory
The Audio Laboratory brings together academics and PhD students advancing immersive audio technologies. It combines theoretical and applied research spanning signal processing; virtual acoustics, including spatial audio and room acoustics modelling (from efficient methods to wave-based techniques); audio augmented reality systems; and psychoacoustics.
The Audio Laboratory is led by Prof. Zoran Cvetkovic and Dr Julie Meyer and is part of the Department of Engineering in the Faculty of Natural, Mathematical & Engineering Sciences at King’s College London. It features an acoustically treated room with a multi-loudspeaker reproduction setup and a moderately reverberant room housing a motion capture system.
Music Computing Lab
The Music Computing Lab at King’s (“MCL@KCL”) brings together staff and students across departments who are involved or interested in “sound and music computing”, very broadly defined. This includes generative methods (GenAI, algorithmic composition), analysis (machine learning, data science, etc.), the computation of musical metadata, and much more. Like the wider field of music computing, we are friendly and enthusiastic, motivated by a shared passion for the subject, and mutually supportive of all working in the field. The Lab was founded in 2025 by Dr Mark Gotham and is affiliated with the wider Music and Acoustics Research Centre and the Computational Humanities Research Group.
Music Theranostics Laboratory
The Music Theranostics Laboratory is home to studies at the intersection of music information research (MIR) and the medical sciences, with a focus on cardiovascular science. The MIR strand uses expert insights and knowledge to tackle music representation, expressivity, and cognition. Music-cardiovascular topics range from characterising and stratifying disease through transferable analytical techniques, to discovering ways to harness cardiovascular responses to music engagement in digital therapeutics and precision diagnostics for cardiovascular disease. The approach draws on data science and optimisation.
The Music Theranostics Lab was founded in 2022 by Professor Elaine Chew. Research at the Lab has been funded by a European Research Council Advanced Grant COSMOS and Proof of Concept Grant HEART.FM, and various doctoral and postdoctoral studentships. Members of the Lab come from disciplines ranging from biomedical engineering and computing to media & arts technology to bioscience and medicine; all are either musicians or have strong interests in music and its performance and/or processing. The Lab is equipped with a reproducing piano, physiological sensors, and bespoke software for music and physiological data capture, processing, and analysis.
The Lab is associated with the School of Biomedical Engineering & Imaging Sciences (BMEIS) in the Faculty of Life Sciences & Medicine, and the Department of Engineering in the Faculty of Natural, Mathematical & Engineering Sciences at King’s College London (KCL), and has a growing network of medical partners, including St Thomas’ Hospital and Barts Heart Centre.
Voice and Speech Processing for Health Group
King’s Voice and Speech Processing for Health Group conducts pioneering research to advance the integration of speech technologies and biomarkers into clinical research and practice. Our research focuses on three core themes:
- We conduct exploratory studies to investigate voice and language biomarkers for assessing symptoms and health states in mental and neurological conditions. We collect and analyse speech samples in the lab and clinic, and remotely with digital mobile health tools.
- We undertake methodological research to examine factors within speech recording and analysis pipelines, evaluating their impact on the reliability of health assessments.
- We test and validate AI-driven speech technologies to assess their reliability and effectiveness across diverse patient populations and operating conditions.
The Lab was founded in 2024 and is led by Dr Nicholas Cummins. It is associated with the Department of Biostatistics & Health Informatics and the CAMHS Digital Lab.