[2602.16334] Spatial Audio Question Answering and Reasoning on Dynamic Source Movements
Summary
This article summarizes a study of Spatial Audio Question Answering (Spatial AQA) focused on dynamic sound source movements. The paper introduces a movement-centric data augmentation framework, a reasoning-enabled ("thinking mode") finetuning approach for audio-language models, and an analysis of query-conditioned source separation as a preprocessing stage.
Why It Matters
As spatial audio technology advances, understanding how machines interpret complex auditory scenes becomes crucial, particularly when sound sources move over time. By showing how audio-language models can be trained to reason about motion, position, and direction from stereo audio alone, this research can benefit applications in AI assistants, robotics, and immersive media.
Key Takeaways
- Introduces a movement-centric spatial audio augmentation framework that synthesizes diverse motion patterns from isolated mono audio events, enabling controlled and scalable training data generation (see the sketch after this list).
- Proposes an end-to-end multimodal finetuning approach with a "thinking mode" that lets audio-language models emit explicit intermediate reasoning steps before answering.
- Demonstrates that query-conditioned source separation, used as a preprocessing stage, improves audio understanding.
- Reports that reasoning amplifies the benefits of source separation, with thinking mode yielding a +5.1% improvement.
- Highlights the interplay between movement modeling, reasoning, and audio separation quality.
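The augmentation framework itself is not included in this summary, but the core idea of synthesizing stereo motion from an isolated mono event can be illustrated compactly. The sketch below is a minimal, hypothetical renderer assuming a simple constant-power panning model with a linear azimuth trajectory and 1/r distance gain; the paper's framework presumably covers richer motion patterns and possibly HRTF- or room-based rendering. All function and parameter names here are illustrative, not the authors' API.

```python
import numpy as np

def render_moving_source(mono, sr, az_start_deg, az_end_deg,
                         dist_start=1.0, dist_end=1.0):
    """Render a mono event as stereo with a linearly moving source.

    Hypothetical sketch: constant-power panning over a linear azimuth
    trajectory in [-90, 90] degrees, plus 1/r distance attenuation.
    """
    n = len(mono)
    az = np.linspace(az_start_deg, az_end_deg, n)    # azimuth per sample
    dist = np.linspace(dist_start, dist_end, n)      # distance per sample
    # Map azimuth [-90, 90] deg to a pan angle in [0, pi/2]:
    # -90 -> hard left, 0 -> center, +90 -> hard right.
    theta = (az + 90.0) / 180.0 * (np.pi / 2.0)
    gain = 1.0 / np.maximum(dist, 1e-3)              # simple 1/r falloff
    left = mono * gain * np.cos(theta)
    right = mono * gain * np.sin(theta)
    return np.stack([left, right], axis=-1)          # (n, 2) stereo

# Example: a 1 s noise burst sweeping left-to-right while approaching.
sr = 16000
event = (np.random.randn(sr) * 0.1).astype(np.float32)
stereo = render_moving_source(event, sr, -60.0, 60.0,
                              dist_start=2.0, dist_end=0.5)
```

One plausible reason this kind of synthesis makes training data generation "controlled and scalable": because the trajectory parameters are known for every rendered clip, question-answer pairs about direction, position, and movement can be derived automatically rather than annotated by hand.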
Computer Science > Sound
arXiv:2602.16334 (cs)
[Submitted on 18 Feb 2026]

Title: Spatial Audio Question Answering and Reasoning on Dynamic Source Movements
Authors: Arvind Krishna Sridhar, Yinyi Guo, Erik Visser

Abstract: Spatial audio understanding aims to enable machines to interpret complex auditory scenes, particularly when sound sources move over time. In this work, we study Spatial Audio Question Answering (Spatial AQA) with a focus on movement reasoning, where a model must infer object motion, position, and directional changes directly from stereo audio. First, we introduce a movement-centric spatial audio augmentation framework that synthesizes diverse motion patterns from isolated mono audio events, enabling controlled and scalable training data generation. Second, we propose an end-to-end multimodal finetuning approach with a thinking mode, which allows audio-language models to produce explicit intermediate reasoning steps before predicting an answer. Third, we investigate the impact of query-conditioned source separation as a preprocessing stage and compare three inference regimes: no masking, an audio grounding model (AGM), and ground-truth masks. Our results show that reasoning amplifies the benefits of source separation, with thinking mode showing significant improvement of +5.1% whe...
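The comparison of inference regimes in the abstract hinges on one mechanical step: applying a time-frequency mask to the input audio before it reaches the audio-language model. The paper does not publish this interface, so the following is a minimal sketch under the assumption of soft STFT-domain masks applied per channel; apply_tf_mask and its parameters are illustrative, with mask=None standing in for the no-masking regime, an AGM-predicted mask for the second regime, and an oracle (ground-truth) mask for the third.

```python
import numpy as np
from scipy.signal import stft, istft

def apply_tf_mask(stereo, sr, mask, nperseg=512):
    """Apply a soft time-frequency mask to each channel of a stereo clip.

    Hypothetical preprocessing step for the masked inference regimes:
    `mask` has shape (freq_bins, frames) matching the STFT grid, with
    values in [0, 1]; mask=None corresponds to the "no masking" regime.
    """
    if mask is None:
        return stereo
    out = []
    for ch in stereo.T:
        _, _, Z = stft(ch, fs=sr, nperseg=nperseg)   # complex spectrogram
        _, x = istft(Z * mask, fs=sr, nperseg=nperseg)
        out.append(x[: len(ch)])                     # trim padding to input length
    return np.stack(out, axis=-1)

# Example: an identity mask, which is equivalent to the no-masking regime.
sr = 16000
stereo = (np.random.randn(sr, 2) * 0.1).astype(np.float32)
_, _, Z = stft(stereo[:, 0], fs=sr, nperseg=512)
restored = apply_tf_mask(stereo, sr, np.ones_like(np.abs(Z)))
```

Under this setup, the abstract's comparison amounts to running the same audio-language model, with and without thinking mode, on the outputs of the three masking regimes and measuring how much of the separation benefit the reasoning step amplifies.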