[2602.14172] Investigation for Relative Voice Impression Estimation
Summary
This article explores relative voice impression estimation (RIE), examining how well different speech modeling approaches can estimate listener-perceived differences in voice characteristics.
Why It Matters
Understanding how voice impressions are perceived can enhance applications in fields like speech recognition, emotion detection, and human-computer interaction. This research highlights the effectiveness of self-supervised models over traditional methods, offering insights for future developments in AI-driven voice technologies.
Key Takeaways
- RIE predicts perceptual differences between two utterances from the same speaker.
- Self-supervised speech representations outperform classical acoustic features in capturing complex voice impressions.
- Current multimodal large language models struggle with fine-grained pairwise tasks.
Computer Science > Sound
arXiv:2602.14172 (cs)
[Submitted on 15 Feb 2026]
Title: Investigation for Relative Voice Impression Estimation
Authors: Keinichi Fujita, Yusuke Ijima
Abstract: Paralinguistic and non-linguistic aspects of speech strongly influence listener impressions. While most research focuses on absolute impression scoring, this study investigates relative voice impression estimation (RIE), a framework for predicting the perceptual difference between two utterances from the same speaker. The estimation target is a low-dimensional vector derived from subjective evaluations, quantifying the perceptual shift of the second utterance relative to the first along an antonymic axis (e.g., "Dark–Bright"). To isolate expressive and prosodic variation, we used recordings of a professional speaker reading a text in various styles. We compare three modeling approaches: classical acoustic features commonly used for speech emotion recognition, self-supervised speech representations, and multimodal large language models (MLLMs). Our results demonstrate that models using self-supervised representations outperform methods with classical acoustic features, particularly in capturing complex and dynamic impressions (e.g., "Cold–Warm") where classical features fail. In contrast, current MLLMs prove unreliable for this fine-grained pairwise task. T...
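The pairwise setup the abstract describes can be sketched as regressing an impression-shift vector from the difference between two utterance embeddings. The following is a minimal illustration, not the authors' implementation: random vectors stand in for mean-pooled self-supervised speech representations, the number of antonymic axes (`n_axes`) is assumed, and the synthetic "ground-truth" shifts are generated by a linear map purely so the regression has something to recover.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 utterance pairs, 768-dim embeddings standing in
# for mean-pooled self-supervised speech representations (one vector per
# utterance), and 3 antonymic impression axes (e.g., Dark-Bright).
n_pairs, emb_dim, n_axes = 200, 768, 3

emb_a = rng.normal(size=(n_pairs, emb_dim))  # first utterance of each pair
emb_b = rng.normal(size=(n_pairs, emb_dim))  # second utterance of each pair

# Synthetic targets: assume (for illustration only) that the perceptual
# shift is a noisy linear function of the embedding difference.
true_w = rng.normal(size=(emb_dim, n_axes))
shift = (emb_b - emb_a) @ true_w + 0.1 * rng.normal(size=(n_pairs, n_axes))

# Pairwise model: least-squares regression from the embedding difference
# to the low-dimensional impression-shift vector.
diff = emb_b - emb_a
w_hat, *_ = np.linalg.lstsq(diff, shift, rcond=None)
pred = diff @ w_hat

print(pred.shape)  # one shift vector per utterance pair
```

The key design point is that the model never sees absolute impression scores: it only maps the *difference* between two utterances of the same speaker to a shift along each antonymic axis, which is what distinguishes RIE from conventional absolute scoring.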