Workshop Date:
Mon, 07/21/2025 – Fri, 08/01/2025
Location:
Center for Computer Research in Music and Acoustics (CCRMA), Stanford University
Virtual Acoustics for Immersive Audio
Virtual acoustics is the simulation of sound propagation in a virtual or real environment, involving technologies to model how sound travels, reflects, and interacts with the space. It is essential for creating immersive and convincing audio-visual experiences in extended reality (XR). This course will cover the fundamentals of spatial audio technology, with an emphasis on recreating virtual acoustics for AR and VR. The first part of the course will cover reverberation, which models the propagation of sound from a source placed in the environment to the listener, including the complex interactions of the sound waves with the environment. The second part of the course will focus on spatialisation, which is how sound is localised in space. We will discuss rendering over multiple loudspeakers and binaurally through headphones. There will be theory lectures and hands-on learning with lab assignments (in Python) and demos, which will make use of CCRMA's Studio E with its 22.1 surround sound setup and the 3D listening room.
Format: In-person (preferred), hybrid
Requirements: Mathematics background at the college level and familiarity with linear algebra. Some Digital Signal Processing (DSP) knowledge is ideal, as is experience with a scientific programming language. Assignments can be done on the CCRMA workstations, but it is preferable to bring your own laptop and headphones.
Instructors
Orchisama Das is a postdoctoral researcher at King's College London. Prior to this, she was a Senior Audio Research Scientist at Sonos, and a Research Fellow at the Institute of Sound Recording at University of Surrey. She received her PhD from CCRMA in 2021, during which she interned at Tesla and Meta Reality Labs. Her research interests are artificial reverberation and room acoustics modeling, with a focus on real-time room acoustics rendering with delay networks.
Gloria Dal Santo received the M.Sc. degree in electrical and electronic engineering from the École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland in 2022. She is currently working toward the Doctoral degree with the Acoustics Lab, Aalto University, Espoo, Finland. Her research interests include artificial reverberation and audio applications of machine learning.
Scholarship opportunities
A limited number of scholarships are available for students and individuals from underrepresented backgrounds in the field. The application deadline is EOD on May 16th (AoE). If interested, please complete the questionnaire at this link.
Tentative schedule
Lectures will be divided into theoretical parts (morning) and hands-on exercises and demos (afternoon).
Week 1: Room Acoustics and Artificial Reverberation
Day 1: Introduction to room acoustics
- Physics of sound
- Wave equations
- Room modes
- Acoustic parameters
- Energy decay and reverberation time
- Directivity
- Echo density
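The standard way to obtain the reverberation time from a measured room impulse response is Schroeder backward integration of the squared RIR, followed by a line fit on the resulting energy decay curve. As a preview of the lab material, here is a minimal Python sketch; the function names and the synthetic test RIR are illustrative, not course code:

```python
import numpy as np

def schroeder_edc(rir):
    """Energy decay curve in dB via Schroeder backward integration."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]
    return 10.0 * np.log10(energy / energy[0])

def estimate_rt60(rir, fs, lo=-5.0, hi=-25.0):
    """Fit a line to the EDC between lo and hi dB, extrapolate to -60 dB."""
    edc = schroeder_edc(rir)
    t = np.arange(len(edc)) / fs
    mask = (edc <= lo) & (edc >= hi)
    slope, _ = np.polyfit(t[mask], edc[mask], 1)  # slope in dB per second
    return -60.0 / slope

# Synthetic RIR: white noise with an exponential envelope and known RT60
fs, rt60 = 48000, 0.5
t = np.arange(int(1.5 * rt60 * fs)) / fs
rng = np.random.default_rng(0)
rir = rng.standard_normal(t.size) * 10.0 ** (-3.0 * t / rt60)
rt60_est = estimate_rt60(rir, fs)  # close to the true 0.5 s
```

Fitting between -5 and -25 dB (a T20-style fit) avoids both the direct sound and the noisy or truncated tail of the measurement.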
Day 2: Artificial reverberation
- Overview of the history of artificial reverberation (physical and analog reverbs)
- Digital artificial reverberation methods
- Convolution
- Wave-based methods
- Geometric methods
- Noise-based methods
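Convolution is the most direct of these methods: the wet signal is the dry signal convolved with a measured (or synthesized) room impulse response. A minimal sketch of FFT-based fast convolution, with an illustrative decaying-noise "RIR" standing in for a measurement:

```python
import numpy as np

def convolve_fft(dry, rir):
    """FFT-based (fast) convolution of a dry signal with an RIR."""
    n = len(dry) + len(rir) - 1
    nfft = 1 << (n - 1).bit_length()  # next power of two >= output length
    wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(rir, nfft), nfft)
    return wet[:n]

# A unit impulse through a toy exponentially decaying noise "RIR":
# the output is simply the RIR itself, which makes the sketch easy to verify
fs = 48000
rng = np.random.default_rng(1)
decay = np.exp(-6.0 * np.arange(fs // 2) / fs)
rir = rng.standard_normal(fs // 2) * decay
dry = np.zeros(fs // 4)
dry[0] = 1.0
wet = convolve_fft(dry, rir)
```

Real-time implementations partition the RIR and convolve block by block, but the frequency-domain product above is the core operation.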
Day 3: Artificial Reverberation with Delay Networks
- Schroeder’s allpass reverberator
- Moorer’s frequency dependent reverberator
- Jot and Chaigne’s Feedback Delay Network (FDN)
- Designing FDN parameters
- Adding control over reverberation time in FDNs
- Scattering Delay Networks
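As a preview of the FDN material, here is a minimal sketch of a 4x4 feedback delay network: a set of delay lines recirculating through an orthogonal (scaled Hadamard) feedback matrix, with per-line attenuations g_i = 10^(-3 m_i / (fs * RT60)) that set a frequency-independent reverberation time. The delay lengths and gains below are illustrative choices, not the course's:

```python
import numpy as np

def fdn_impulse_response(n_samples, fs=48000, delays=(1499, 1889, 2381, 2999), rt60=1.0):
    """Impulse response of a 4x4 FDN. The feedback matrix is a scaled
    Hadamard matrix (orthogonal, hence lossless); the per-line gains
    g_i = 10**(-3 * m_i / (fs * rt60)) set the target reverberation time."""
    N = len(delays)
    A = np.array([[1, 1, 1, 1],
                  [1, -1, 1, -1],
                  [1, 1, -1, -1],
                  [1, -1, -1, 1]], dtype=float) / 2.0
    g = 10.0 ** (-3.0 * np.asarray(delays) / (fs * rt60))
    b = np.ones(N)                        # input gains
    c = np.ones(N)                        # output gains
    bufs = [np.zeros(m) for m in delays]  # circular delay-line buffers
    ptr = [0] * N
    out = np.zeros(n_samples)
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0        # unit impulse input
        s = np.array([bufs[i][ptr[i]] for i in range(N)])  # delay outputs
        out[n] = c @ s
        fb = A @ (g * s) + b * x          # attenuate, mix, feed back
        for i in range(N):
            bufs[i][ptr[i]] = fb[i]
            ptr[i] = (ptr[i] + 1) % delays[i]
    return out

ir = fdn_impulse_response(48000)  # one second of decaying reverb tail
```

Mutually prime delay lengths spread the modes and avoid coincident echoes; because the mixing matrix is orthogonal and every gain is below one, the loop is guaranteed stable.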
Day 4: Differentiable artificial reverberation
- Overview of supervised learning and training pipeline
- Differentiable DSP (DDSP)
- Audio loss functions
- Frequency domain optimization using FLAMO library
- Differentiable FDNs for artificial reverberation
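The FLAMO library itself is not sketched here; instead, a toy illustration of the underlying idea: a reverberation parameter is fitted by gradient descent on a loss whose gradient is available in closed form (in FLAMO and DDSP frameworks, automatic differentiation supplies the gradient). Working on the dB-scale energy decay curve keeps the problem well conditioned. All values below are made up for the example:

```python
import numpy as np

# "Measured" energy decay curve in dB: a straight line with slope -60/RT60
fs = 1000
t = np.arange(fs) / fs
rt60_true = 0.5
target_db = -60.0 * t / rt60_true

# Model with one decay-rate parameter; the loss is the MSE on the dB-scale
# EDC, whose gradient with respect to the parameter is known in closed form.
rate = 20.0                                          # initial guess (dB/s)
lr = 1.0
for _ in range(100):
    pred_db = -rate * t
    grad = np.mean(2.0 * (pred_db - target_db) * (-t))  # d(MSE)/d(rate)
    rate -= lr * grad
rt60_est = 60.0 / rate                               # converges to rt60_true
```

The same loop structure, with the analytic gradient replaced by backpropagation through a differentiable FDN, is what scales this idea up to full reverberator designs.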
Day 5: Modeling acoustics in coupled spaces
- Multi-slope energy decay
- Auralisation of coupled room acoustics
- Grouped Feedback Delay Networks
- Coupled Volume Scattering Delay Networks
- Hybrid image source + FDN
- Room impulse response synthesis for 6DoF rendering in Augmented Reality
- Common slopes model
- Neural acoustical fields
- Differentiable Grouped Feedback Delay Networks
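Coupled rooms produce energy decay curves that are sums of exponentials rather than a single slope: the louder, faster decay of the near room gives way to the quieter, slower decay bleeding in from the coupled volume. A small sketch of a two-slope EDC; the decay times and coupling amplitude are made-up values:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
rt_a, rt_b = 0.4, 2.0          # decay times of the two coupled volumes (s)
amp_a, amp_b = 1.0, 0.05       # the slow slope starts well below the fast one
edc = amp_a * 10.0 ** (-6.0 * t / rt_a) + amp_b * 10.0 ** (-6.0 * t / rt_b)
edc_db = 10.0 * np.log10(edc)  # steep early decay, then a shallower late slope
```

Models like common slopes fit exactly this kind of multi-exponential decomposition to measured decays.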
Week 2: Spatial Audio
Day 6: 3D sound perception and binaural rendering
- Sound localization and acoustic cues
- Externalization and collapse
- Head related transfer functions (HRTFs) and binaural rendering
- Binaural room impulse responses (BRIRs)
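Real binaural rendering convolves the source signal with measured HRTFs; as a toy stand-in, the following sketch reproduces only the two coarsest localization cues, an interaural time difference from the Woodworth spherical-head formula and a fixed interaural level difference. The function name and constants are illustrative assumptions:

```python
import numpy as np

def render_binaural(mono, azimuth_deg, fs=48000, head_radius=0.0875, c=343.0):
    """Toy binaural pan: interaural time difference from the Woodworth
    spherical-head formula, plus a crude fixed interaural level
    difference. Not a measured HRTF; for illustration only."""
    az = np.radians(abs(azimuth_deg))
    itd = head_radius / c * (az + np.sin(az))  # ITD in seconds
    delay = int(round(itd * fs))               # far-ear delay in samples
    near = np.concatenate([mono * 1.0, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), mono * 0.6])
    if azimuth_deg >= 0:   # source to the right: left ear is the far ear
        return np.stack([far, near])
    return np.stack([near, far])
```

At 90 degrees azimuth this yields roughly a 0.66 ms delay at the far ear, the right order of magnitude for a human head; everything an HRTF adds beyond that (pinna filtering, elevation cues) is what makes externalization convincing.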
Day 7: Multichannel rendering
- Microphone arrays
- Vector based amplitude panning (VBAP)
- Ambisonics
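In 2D, VBAP computes gains for one loudspeaker pair by expressing the source direction vector in the basis of the two loudspeaker direction vectors, then normalizing the gains for constant power. A minimal sketch; the function name and angles are illustrative:

```python
import numpy as np

def vbap_2d(source_az_deg, speaker_az_deg):
    """2D vector-base amplitude panning over one loudspeaker pair:
    solve L @ g = p for the gains g, then normalize for constant power."""
    p = np.array([np.cos(np.radians(source_az_deg)),
                  np.sin(np.radians(source_az_deg))])
    L = np.array([[np.cos(a), np.sin(a)]
                  for a in np.radians(speaker_az_deg)]).T  # columns = speaker unit vectors
    g = np.linalg.solve(L, p)
    return g / np.linalg.norm(g)

# Source at +15 degrees between loudspeakers at -30 and +30 degrees
gains = vbap_2d(15.0, [-30.0, 30.0])
```

As expected, the loudspeaker nearer the source direction receives the larger gain; full VBAP extends this to 3D by selecting an active loudspeaker triplet instead of a pair.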
Day 8: Parametric spatial RIR processing
- Spatial room impulse responses (SRIRs)
- Spatial Decomposition Method (SDM)
- Spatial Impulse Response Rendering (SIRR)
- Rendering from arbitrary microphone array recordings
Day 9: Spatial reverberation with delay networks
- Directional Feedback Delay Network (DFDN)
- Spatialized Scattering Delay Network
- Decorrelating FDNs for multichannel late reverb rendering
Day 10: Guest lecture and demo
Source: https://ccrma.stanford.edu/workshops/virtual-acoustics-immersive-audio