[2602.14468] LACONIC: Length-Aware Constrained Reinforcement Learning for LLM

arXiv - Machine Learning

Summary

LACONIC introduces a novel reinforcement learning method for large language models that balances response length and task performance, achieving over 50% reduction in output length while maintaining or improving task accuracy.

Why It Matters

As large language models become integral in various applications, optimizing their output length without sacrificing performance is crucial. LACONIC addresses the inefficiencies of existing length-control methods, providing a scalable solution that enhances model usability and efficiency.

Key Takeaways

  • LACONIC combines task rewards with a length-based cost for improved output management.
  • The method ensures robust length control while preserving or enhancing task performance.
  • LACONIC reduces output length by over 50% across various datasets.
  • It integrates seamlessly into standard reinforcement learning frameworks with minimal overhead.
  • The approach is backed by theoretical guarantees supporting its length-control behavior.

Computer Science > Machine Learning
arXiv:2602.14468 (cs) | Submitted on 16 Feb 2026

Title: LACONIC: Length-Aware Constrained Reinforcement Learning for LLM
Authors: Chang Liu, Yiran Zhao, Lawrence Liu, Yaoqi Ye, Csaba Szepesvári, Lin F. Yang

Abstract: Reinforcement learning (RL) has enhanced the capabilities of large language models (LLMs) through reward-driven training. Nevertheless, this process can introduce excessively long responses, inflating inference latency and computational overhead. Prior length-control approaches typically rely on fixed heuristic reward shaping, which can misalign with the task objective and requires brittle tuning. In this work, we propose LACONIC, a reinforcement learning method that enforces a target token budget during training. Specifically, we update policy models using an augmented objective that combines the task reward with a length-based cost. To balance brevity and task performance, the cost scale is adaptively adjusted throughout training. This yields robust length control while preserving task reward. We provide a theoretical guarantee that supports the method. Across mathematical reasoning models and datasets, LACONIC preserves or improves pass@1 while reducing output length by over 50%. It maintains out-of-domain performance on general knowledge and multilingual benchmarks with...
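The abstract describes an augmented objective (task reward minus a length-based cost) whose cost scale is adapted during training. A minimal sketch of that idea, in the style of a Lagrangian dual update, might look like the following. All names here (`token_budget`, `dual_lr`, the normalized cost) are illustrative assumptions, not the paper's actual formulation.

```python
def length_cost(response_len: int, token_budget: int) -> float:
    """Normalized budget overshoot; zero when the response fits the budget."""
    return max(0.0, (response_len - token_budget) / token_budget)

def augmented_reward(task_reward: float, response_len: int,
                     token_budget: int, lam: float) -> float:
    """Task reward combined with the length-based cost, scaled by lam."""
    return task_reward - lam * length_cost(response_len, token_budget)

def update_cost_scale(lam: float, avg_len: float, token_budget: int,
                      dual_lr: float = 0.1) -> float:
    """Adapt the cost scale: raise lam when the batch average exceeds the
    budget, lower it otherwise, keeping lam non-negative."""
    violation = (avg_len - token_budget) / token_budget
    return max(0.0, lam + dual_lr * violation)

# Toy usage on one batch (task rewards and token counts are made up):
lam = 1.0
budget = 1000
batch = [(1.0, 900), (0.0, 1400), (1.0, 1100)]  # (task_reward, tokens)
rewards = [augmented_reward(r, n, budget, lam) for r, n in batch]
lam = update_cost_scale(lam, sum(n for _, n in batch) / len(batch), budget)
```

Under this sketch, over-budget responses see their reward shrink while in-budget ones are untouched, and the penalty weight grows only while the batch still overshoots the target, which is one plausible reading of "adaptively adjusted throughout training."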
