[2602.23003] Scattering Transform for Auditory Attention Decoding
Summary
This paper explores the use of a scattering transform for auditory attention decoding, comparing its effectiveness against traditional preprocessing methods as input to neural-network classifiers.
Why It Matters
As the demand for hearing aids rises, improving auditory attention decoding is crucial for enhancing user experience. This research presents a novel approach that could lead to better performance in distinguishing sounds in complex environments, addressing a significant challenge in audio processing.
Key Takeaways
- The scattering transform shows promise in improving auditory attention decoding.
- It outperforms traditional preprocessing methods in specific classification tasks.
- Performance varies based on the dataset and model used, indicating the need for tailored approaches.
Electrical Engineering and Systems Science > Signal Processing
arXiv:2602.23003 (eess) [Submitted on 26 Feb 2026]
Title: Scattering Transform for Auditory Attention Decoding
Authors: René Pallenberg, Fabrice Katzberg, Alfred Mertins, Marco Maass
Abstract: The use of hearing aids will increase in the coming years due to demographic change. One open problem that remains to be solved by a new generation of hearing aids is the cocktail party problem. A possible solution is electroencephalography-based auditory attention decoding, which has been the subject of several studies in recent years, most of which rely on the same preprocessing methods. In this work, in order to achieve an advantage, the use of a scattering transform is proposed as an alternative to these preprocessing methods. The two-layer scattering transform is compared with a regular filterbank, the synchrosqueezing short-time Fourier transform, and the common preprocessing. To demonstrate the performance, the known and the proposed preprocessing methods are compared on different classification tasks over two widely used datasets, provided by KU Leuven (KUL) and the Technical University of Denmark (DTU). Both established and recent neural-network models (CNNs, LSTMs, and Transformer- and graph-based architectures) are used for classification…
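The two-layer scattering transform the abstract describes is a cascade of wavelet filtering, complex modulus, and low-pass averaging. The following is a minimal numpy sketch of that cascade, not the paper's implementation: the Gabor filterbank, its dyadic center frequencies, and the averaging window are all illustrative assumptions (a full second-order scattering would also restrict second-layer filters to frequencies below the first-layer band).

```python
import numpy as np

def gabor_bank(n, num_filters=8):
    """Bandpass Gabor (Morlet-like) filters defined in the frequency domain.

    Dyadic center frequencies are an illustrative choice, not the paper's.
    """
    freqs = np.fft.fftfreq(n)
    centers = 0.4 * 2.0 ** (-np.arange(num_filters))
    sigmas = centers / 4.0
    return [np.exp(-0.5 * ((freqs - c) / s) ** 2) for c, s in zip(centers, sigmas)]

def lowpass(n, sigma=0.02):
    """Gaussian low-pass filter used for the final averaging step."""
    freqs = np.fft.fftfreq(n)
    return np.exp(-0.5 * (freqs / sigma) ** 2)

def scatter(x):
    """Two-layer scattering coefficients: filter -> modulus -> average, twice."""
    n = len(x)
    psi, phi = gabor_bank(n), lowpass(n)
    X = np.fft.fft(x)
    # Zeroth order S0: low-pass average of the raw signal.
    coeffs = [np.abs(np.fft.ifft(X * phi)).mean()]
    for p1 in psi:
        # First-order modulus U1 = |x * psi_1|, then averaged to S1.
        u1 = np.abs(np.fft.ifft(X * p1))
        coeffs.append(np.abs(np.fft.ifft(np.fft.fft(u1) * phi)).mean())
        U1 = np.fft.fft(u1)
        for p2 in psi:
            # Second-order modulus U2 = ||x * psi_1| * psi_2|, averaged to S2.
            u2 = np.abs(np.fft.ifft(U1 * p2))
            coeffs.append(np.abs(np.fft.ifft(np.fft.fft(u2) * phi)).mean())
    return np.array(coeffs)

# Toy single-channel signal standing in for one EEG channel.
x = np.sin(2 * np.pi * 0.1 * np.arange(512))
feat = scatter(x)
print(feat.shape)  # 1 zeroth + 8 first + 64 second-order paths -> (73,)
```

The resulting fixed-length, translation-stable feature vector (one per channel) is the kind of representation that would replace the usual filterbank front end before a CNN, LSTM, or Transformer classifier.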