[2602.16256] Color-based Emotion Representation for Speech Emotion Recognition

arXiv - AI · 3 min read

Summary

This article presents a novel approach to Speech Emotion Recognition (SER) by utilizing color attributes to represent emotions, enhancing the interpretability and performance of SER systems through machine learning techniques.

Why It Matters

The study addresses limitations in traditional SER methods by introducing color-based representations, which could lead to more nuanced and accurate emotion detection in speech. This innovation has implications for various applications, including human-computer interaction, mental health monitoring, and customer service.

Key Takeaways

  • Color attributes can effectively represent emotions in speech.
  • The study developed regression models for color attributes using machine learning.
  • Multitask learning improved the performance of emotion classification tasks.
  • Color-attribute annotations were collected for an emotional speech corpus via crowdsourcing.
  • This approach may lead to more interpretable and diverse emotion recognition systems.

Electrical Engineering and Systems Science > Audio and Speech Processing

arXiv:2602.16256 (eess) [Submitted on 18 Feb 2026]

Title: Color-based Emotion Representation for Speech Emotion Recognition
Authors: Ryotaro Nagase, Ryoichi Takashima, Yoichi Yamashita

Abstract: Speech emotion recognition (SER) has traditionally relied on categorical or dimensional labels. However, these labels are limited in representing both the diversity and interpretability of emotions. To overcome this limitation, we focus on color attributes, such as hue, saturation, and value, to represent emotions as continuous and interpretable scores. We annotated an emotional speech corpus with color attributes via crowdsourcing and analyzed the annotations. Moreover, we built regression models for color attributes in SER using machine learning and deep learning, and explored multitask learning of color attribute regression and emotion classification. As a result, we demonstrated the relationship between color attributes and emotions in speech, and successfully developed color attribute regression models for SER. We also showed that multitask learning improved the performance of each task.

Subjects: Audio and Speech Processing (eess.AS); Artificial Intelligence (cs.AI); Sound (cs.SD)
Cite as: arXiv:2602.16256 [eess.AS] (or arXiv:2602.16256v1 ...)
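The paper does not publish its model code, but the multitask setup it describes — a shared representation feeding a color-attribute regression head (hue, saturation, value as continuous scores) and an emotion classification head, trained on a combined loss — can be sketched as follows. This is a minimal illustration with random stand-in data; the feature dimension, number of emotion classes, head sizes, and equal loss weighting are all assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 32 utterances, each a 16-dim acoustic feature vector.
n, d = 32, 16
X = rng.normal(size=(n, d))
hsv = rng.uniform(size=(n, 3))       # target hue/saturation/value scores in [0, 1]
emo = rng.integers(0, 4, size=n)     # placeholder emotion labels (4 classes assumed)

# Shared encoder plus two task-specific heads.
W_shared = rng.normal(scale=0.1, size=(d, 8))
W_reg = rng.normal(scale=0.1, size=(8, 3))   # color-attribute regression head
W_cls = rng.normal(scale=0.1, size=(8, 4))   # emotion classification head

def forward(X):
    h = np.tanh(X @ W_shared)                 # shared representation
    y_reg = h @ W_reg                         # predicted H/S/V scores
    logits = h @ W_cls
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)         # softmax over emotion classes
    return y_reg, p

y_reg, p = forward(X)
mse = np.mean((y_reg - hsv) ** 2)                    # regression loss
ce = -np.mean(np.log(p[np.arange(n), emo] + 1e-12))  # cross-entropy loss
loss = mse + ce                                      # equally weighted multitask loss
print(y_reg.shape, p.shape, float(loss) > 0)
```

In practice both heads would sit on a learned speech encoder and the two losses would be balanced with a tuned weight; the point here is only the structure — one shared representation, two supervised objectives summed into a single training loss.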
