[2509.22876] HEART: Emotionally-Driven Test-Time Scaling of Language Models
Summary
The paper presents HEART, a framework that leverages emotional cues to enhance the reasoning capabilities of language models during test-time scaling, demonstrating improved accuracy across various benchmarks.
Why It Matters
This research highlights the potential of integrating emotional intelligence into AI models, suggesting that emotional cues can significantly enhance decision-making processes. As AI continues to evolve, understanding how emotions can influence reasoning may lead to more effective and human-like AI systems.
Key Takeaways
- HEART utilizes emotional cues to guide AI reasoning.
- The framework alternates between critical and encouraging tones to improve model performance.
- Results show consistent accuracy gains over traditional methods.
- HEART was evaluated across seven challenging benchmarks.
- The study suggests emotional regulation could be key to future AI advancements.
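The alternating-tone mechanism above can be sketched in a few lines. This is a minimal illustration in the spirit of HEART, not the paper's actual implementation: `call_model` is a hypothetical stand-in for a real LLM API, and the tone prompts are invented examples, not the authors' wording.

```python
# Illustrative sketch of emotionally-cued test-time scaling (HEART-style).
# Assumptions: `call_model` is a stub replacing a real LLM call; the two
# tone strings are hypothetical, not taken from the paper.
CRITICAL = "Scrutinize your previous answer harshly and find its flaw."
ENCOURAGING = "Good progress! Try a fresh, creative angle on the problem."

def call_model(prompt: str) -> str:
    """Stub model: echoes the question line; swap in a real LLM API here."""
    return "draft answer to: " + prompt.splitlines()[-1]

def heart_style_scaling(question: str, rounds: int = 4) -> list[str]:
    """Alternate critical and encouraging tones across reasoning rounds,
    collecting one candidate answer per round."""
    answers = []
    for i in range(rounds):
        tone = CRITICAL if i % 2 == 0 else ENCOURAGING  # alternate affect
        prompt = f"{tone}\n{question}"
        answers.append(call_model(prompt))
    return answers
```

In a real setting, each round would condition on the previous answer and a verifier or self-consistency check would decide when to stop sampling; here the loop only shows how the affective prefix alternates between rounds.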
arXiv:2509.22876 (cs) Computer Science > Computation and Language
Submitted on 26 Sep 2025 (v1); last revised 12 Feb 2026 (this version, v5)
Title: HEART: Emotionally-Driven Test-Time Scaling of Language Models
Authors: Gabriela Pinto, Palash Goyal, Mihir Parmar, Yiwen Song, Souradip Chakraborty, Zifeng Wang, Jinsung Yoon, Hamid Palangi, Tomas Pfister
Abstract: Test-time scaling has significantly improved how AI models solve problems, yet current methods often get stuck in repetitive, incorrect patterns of thought. We introduce HEART, a framework that uses emotional cues to guide the model's focus, much like how feelings contribute to human decision-making. By alternating between critical tones to sharpen error detection and encouraging tones to spark new ideas, HEART helps the model break out of dead-end reasoning and find the right solution. We evaluate HEART across seven high-difficulty benchmarks, including Humanity's Last Exam, GPQA Diamond, and LiveCodeBench, demonstrating robustness across diverse models. Results show that emotion facilitates deeper reasoning, yielding consistent accuracy gains over affect-sterile baselines. These findings suggest that the next frontier in machine reasoning lies in the strategic integration of affective regulation to guide logical synthesis.
Subjects: Computation and Language