[2602.16154] Balancing Faithfulness and Performance in Reasoning via Multi-Listener Soft Execution

arXiv - AI · 4 min read

Summary

The paper presents REMUL, a multi-party reinforcement learning approach that improves the faithfulness of chain-of-thought reasoning in large language models (LLMs) while preserving task performance, directly addressing the tradeoff between interpretability and accuracy.

Why It Matters

As LLMs increasingly influence decision-making processes, ensuring their reasoning is both faithful and effective is crucial. This research addresses the challenge of balancing interpretability with performance, providing a framework that could improve the reliability of AI systems in critical applications.

Key Takeaways

  • REMUL improves reasoning faithfulness in LLMs through multi-listener execution.
  • The approach addresses the tradeoff between faithfulness and task performance.
  • Improvements are observed across multiple reasoning benchmarks.
  • The method enhances clarity and correctness of reasoning traces.
  • Shorter and more direct chain-of-thought outputs are achieved.

Computer Science > Computation and Language

arXiv:2602.16154 (cs) · Submitted on 18 Feb 2026

Title: Balancing Faithfulness and Performance in Reasoning via Multi-Listener Soft Execution

Authors: Nithin Sivakumaran, Shoubin Yu, Hyunji Lee, Yue Zhang, Ali Payani, Mohit Bansal, Elias Stengel-Eskin

Abstract: Chain-of-thought (CoT) reasoning sometimes fails to faithfully reflect the true computation of a large language model (LLM), hampering its utility in explaining how LLMs arrive at their answers. Moreover, optimizing for faithfulness and interpretability in reasoning often degrades task performance. To address this tradeoff and improve CoT faithfulness, we propose Reasoning Execution by Multiple Listeners (REMUL), a multi-party reinforcement learning approach. REMUL builds on the hypothesis that reasoning traces which other parties can follow will be more faithful. A speaker model generates a reasoning trace, which is truncated and passed to a pool of listener models who "execute" the trace, continuing the trace to an answer. Speakers are rewarded for producing reasoning that is clear to listeners, with additional correctness regularization via masked supervised finetuning to counter the tradeoff between faithfulness and performance. On multiple reasoning benchmarks (BIG-Bench Extra Hard, Mu...
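The listener-based reward the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name `listener_reward`, the truncation fraction, and the listeners-as-callables interface are all assumptions made here for clarity. The core idea from the abstract is preserved: listeners see only a truncated prefix of the speaker's trace, continue it to an answer, and the speaker is rewarded by how many listeners still reach the correct answer.

```python
# Hedged sketch of a REMUL-style listener reward (hypothetical names,
# not the paper's code). Listeners are modeled as callables that map a
# partial reasoning trace to a final answer.

def listener_reward(trace_tokens, truncate_frac, listeners, gold_answer):
    """Score a speaker trace by how many listeners can 'execute' it.

    trace_tokens  : the speaker's chain-of-thought, as a list of tokens
    truncate_frac : fraction of the trace shown to listeners (e.g. 0.5)
    listeners     : callables mapping a truncated trace -> final answer
    gold_answer   : reference answer used to judge listener continuations
    """
    cut = max(1, int(len(trace_tokens) * truncate_frac))
    partial = trace_tokens[:cut]  # listeners only see a truncated prefix
    # A trace counts as "followable" for a listener if, continuing from
    # the prefix alone, that listener still reaches the correct answer.
    hits = sum(1 for listen in listeners if listen(partial) == gold_answer)
    return hits / len(listeners)  # fraction of listeners that succeed
```

In the paper this reward would drive reinforcement learning for the speaker, combined with the masked supervised finetuning term for correctness; the sketch above covers only the listener-agreement signal.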
