[2604.00055] Generalizable Dense Reward for Long-Horizon Robotic Tasks
Computer Science > Robotics
arXiv:2604.00055 (cs)
[Submitted on 31 Mar 2026]

Title: Generalizable Dense Reward for Long-Horizon Robotic Tasks
Authors: Silong Yong, Stephen Sheng, Carl Qi, Xiaojie Wang, Evan Sheehan, Anurag Shivaprasad, Yaqi Xie, Katia Sycara, Yesh Dattatreya

Abstract: Existing robotic foundation policies are trained primarily via large-scale imitation learning. While such models demonstrate strong capabilities, they often struggle with long-horizon tasks due to distribution shift and error accumulation. Reinforcement learning (RL) can finetune these models, but it does not work well across diverse tasks without manual reward engineering. We propose VLLR, a dense reward framework combining (1) an extrinsic reward from Large Language Models (LLMs) and Vision-Language Models (VLMs) for task progress recognition, and (2) an intrinsic reward based on policy self-certainty. VLLR uses LLMs to decompose tasks into verifiable subtasks and VLMs to estimate subtask progress, which initializes the value function during a brief warm-up phase, avoiding prohibitive inference cost during full training; self-certainty then provides per-step intrinsic guidance throughout PPO finetuning. Ablation studies reveal complementary benefits: VLM-based value initialization primarily improves task completion efficiency, while self-certainty primar...
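The abstract does not spell out how the self-certainty bonus is computed or combined with the extrinsic reward. Below is a minimal sketch, assuming a discrete action head and defining self-certainty as the KL divergence between the policy's action distribution and a uniform distribution; the function names and the `beta` weight are hypothetical, not taken from the paper.

```python
import math
import torch
import torch.nn.functional as F

def self_certainty(logits: torch.Tensor) -> torch.Tensor:
    """KL(pi(a|s) || Uniform) over the action dimension.

    Equals 0 for a maximally uncertain policy and log(n) for a
    deterministic one, so it is large on steps where the policy
    is confident about its action.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    n = logits.shape[-1]
    neg_entropy = (log_probs.exp() * log_probs).sum(dim=-1)  # -H(pi)
    return neg_entropy + math.log(n)

def dense_reward(task_reward: torch.Tensor, logits: torch.Tensor,
                 beta: float = 0.01) -> torch.Tensor:
    """Per-step reward fed to PPO: the extrinsic task reward plus a
    small self-certainty bonus (beta is a hypothetical weight)."""
    return task_reward + beta * self_certainty(logits)

# Toy usage: a batch of 4 states with a 7-way discrete action head.
logits = torch.randn(4, 7)
rewards = dense_reward(task_reward=torch.zeros(4), logits=logits)
```

Under this reading, the VLM-estimated progress signal is used only to warm-start the value function before training, so the expensive VLM queries never appear in this per-step reward path; only the cheap self-certainty term is evaluated throughout PPO finetuning.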