[2511.14565] Masked IRL: LLM-Guided Reward Disambiguation from Demonstrations and Language
Computer Science > Robotics — arXiv:2511.14565 (cs)
[Submitted on 18 Nov 2025 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: Masked IRL: LLM-Guided Reward Disambiguation from Demonstrations and Language
Authors: Minyoung Hwang, Alexandra Forsey-Smerek, Nathaniel Dennler, Andreea Bobu

Abstract: Robots can adapt to user preferences by learning reward functions from demonstrations, but with limited data, reward models often overfit to spurious correlations and fail to generalize. This happens because demonstrations show robots how to do a task but not what matters for that task, causing the model to focus on irrelevant state details. Natural language can more directly specify what the robot should focus on, and, in principle, disambiguate between many reward functions consistent with the demonstrations. However, existing language-conditioned reward learning methods typically treat instructions as simple conditioning signals, without fully exploiting their potential to resolve ambiguity. Moreover, real instructions are often ambiguous themselves, so naive conditioning is unreliable. Our key insight is that these two input types carry complementary information: demonstrations show how to act, while language specifies what is important. We propose Masked Inverse Reinforcement Learning (Masked IRL), ...
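To make the masking idea concrete, here is a toy sketch (not the paper's implementation; the feature names, weights, and mask values are all illustrative assumptions) of how a language-derived relevance mask could zero out irrelevant state dimensions before a linear reward is evaluated, so spurious features cannot influence the score:

```python
import numpy as np

def masked_reward(state, weights, relevance_mask):
    """Linear reward computed only over state dimensions the mask marks relevant.

    relevance_mask is assumed to come from an LLM reading the instruction
    (e.g. "keep the cup upright near the goal") and flagging which features matter.
    """
    state = np.asarray(state, dtype=float)
    return float(np.dot(weights * relevance_mask, state))

# Hypothetical 4-D state: [distance_to_goal, table_height, cup_tilt, wall_color]
weights = np.array([-1.0, 0.2, -0.5, 0.3])  # few-shot fit; spuriously weights wall_color
mask    = np.array([1.0, 0.0, 1.0, 0.0])    # language says only distance and tilt matter

r = masked_reward([2.0, 0.8, 0.1, 5.0], weights, mask)
# Only distance (-1.0 * 2.0) and tilt (-0.5 * 0.1) contribute: r = -2.05
```

Masking at the feature level, rather than conditioning the reward network on the raw instruction, means an ambiguous sentence only needs to be resolved into a coarse relevance pattern rather than a full reward specification.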