[2603.19255] LARFT: Closing the Cognition-Action Gap for Length Instruction Following in Large Language Models
Computer Science > Computation and Language
arXiv:2603.19255 (cs)
[Submitted on 25 Feb 2026]

Title: LARFT: Closing the Cognition-Action Gap for Length Instruction Following in Large Language Models

Authors: Wei Zhang, Lintong Du, Yuanhe Zhang, Zhenhong Zhou, Kun Wang, Li Sun, Sen Su

Abstract: Despite the strong performance of Large Language Models (LLMs) on complex instruction-following tasks, precise control of output length remains a persistent challenge. Existing methods primarily attempt to enforce length constraints by externally imposing length signals or optimization objectives, while largely overlooking the underlying limitation: the model's intrinsic deficit in length cognition. To address this, we propose LARFT (Length-Aware Reinforcement Fine-Tuning), a training framework that aligns the model's length cognition with its actions. Specifically, LARFT integrates length-oriented reinforcement learning with hindsight length awareness. By transforming on-policy data into hindsight self-awareness tasks, in which the model learns to identify the actual length of its own generation, LARFT jointly optimizes the model's internal representation of length information and refines its policy to satisfy length constraints, thereby achieving precise and reliable length instruction following. Extensive experiments a...
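The abstract describes two coupled ingredients: a length-oriented RL reward computed on the model's rollouts, and hindsight self-awareness tasks in which the model is asked to report the actual length of its own generation. The sketch below illustrates, under assumptions, what those two data streams could look like; the word-count length measure, the tolerance-based reward shape, and the function names (length_reward, make_hindsight_example) are illustrative choices, not details taken from the paper.

```python
def length_reward(response: str, target_len: int, tolerance: int = 10) -> float:
    """Reward that decays with the gap between actual and requested length (assumed shape)."""
    actual_len = len(response.split())          # length measured in words (assumption)
    gap = abs(actual_len - target_len)
    return max(0.0, 1.0 - gap / max(tolerance, 1))

def make_hindsight_example(prompt: str, response: str) -> dict:
    """Turn an on-policy generation into a self-awareness task:
    the model must state how long its own output actually was."""
    actual_len = len(response.split())
    return {
        "prompt": (
            f"{prompt}\n\n{response}\n\n---\n"
            "You produced the response above. How many words does it contain?"
        ),
        "target": str(actual_len),
    }

# Usage: for each sampled rollout, an RL objective could use length_reward as
# (part of) its reward, while make_hindsight_example produces auxiliary data
# that trains the model's internal sense of length.
prompt = "Summarize the plot of Hamlet in exactly 120 words."
response = "Hamlet, prince of Denmark, learns that his uncle murdered his father..."
print(length_reward(response, target_len=120))
print(make_hindsight_example(prompt, response))
```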