[2510.15464] Learning to Answer from Correct Demonstrations
Summary
The paper studies learning to generate a correct answer from demonstrations when multiple answers may be acceptable, formalizing the problem as imitation learning in contextual bandits and presenting a method whose sample complexity is logarithmic in the cardinality of the reward class.
Why It Matters
This research addresses the challenge of learning to answer prompts that admit multiple acceptable answers, using only demonstrations of some correct answer, the setting underlying supervised fine-tuning (SFT) of language models. The findings could improve the sample efficiency of training in applications such as natural language processing and AI safety.
Key Takeaways
- Introduces a method for learning from correct demonstrations in answer generation.
- Formalizes the problem as imitation learning in contextual bandits.
- Proposes a strictly weaker assumption than prior work: bounded complexity of the reward class rather than of the demonstrator's policy class.
- Shows that likelihood-maximization methods can fail in this setting, and gives an approach with sample complexity logarithmic in the cardinality of the reward class.
- Highlights the potential for application in AI systems requiring nuanced response generation.
Computer Science > Machine Learning
arXiv:2510.15464 (cs)
[Submitted on 17 Oct 2025 (v1), last revised 26 Feb 2026 (this version, v2)]
Title: Learning to Answer from Correct Demonstrations
Authors: Nirmit Joshi, Gene Li, Siddharth Bhandari, Shiva Prasad Kasiviswanathan, Cong Ma, Nathan Srebro
Abstract: We study the problem of learning to generate an answer (or completion) to a question (or prompt), where there could be multiple correct answers, any one of which is acceptable at test time. Learning is based on demonstrations of some correct answer to each training question, as in Supervised Fine Tuning (SFT). We formalize the problem as imitation learning (i.e., apprenticeship learning) in contextual bandits, with offline demonstrations from some expert (optimal, or very good) policy, without explicitly observed rewards. In contrast to prior work, which assumes the demonstrator belongs to a bounded-complexity policy class, we propose relying only on the underlying reward model (i.e., specifying which answers are correct) being in a bounded-complexity class, which we argue is a strictly weaker assumption. We show that likelihood-maximization methods can fail in this setting, and instead present an approach that learns to answer nearly as well as the demonstrator, with sample complexity logarithmic in the cardinality of the reward class...
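The setting described in the abstract can be sketched formally. The notation below is illustrative and chosen here for clarity; the paper's own symbols may differ. We assume a binary reward $r^\*$ (an answer is either correct or not) drawn from a known finite class $\mathcal{R}$, an expert policy $\pi^{\mathrm{E}}$ that mostly produces correct answers, and i.i.d. offline demonstrations with no observed rewards:

```latex
% Hypothetical notation, not taken from the paper.
% Prompts x ~ D; answers y; unknown correctness reward r^*(x,y) in {0,1},
% with r^* belonging to a known finite class R.
\begin{aligned}
\text{Data:}\quad & (x_1, y_1), \dots, (x_n, y_n), \qquad
  x_i \sim D,\quad y_i \sim \pi^{\mathrm{E}}(\cdot \mid x_i), \\
\text{Expert:}\quad & \mathbb{E}_{x \sim D,\; y \sim \pi^{\mathrm{E}}(\cdot \mid x)}
  \big[ r^{*}(x, y) \big] \ \text{is high (optimal or near-optimal)}, \\
\text{Goal:}\quad & \text{output } \hat{\pi} \text{ such that } \
  \mathbb{E}_{x \sim D,\; y \sim \hat{\pi}(\cdot \mid x)}\big[ r^{*}(x, y) \big]
  \ \ge\ \mathbb{E}_{x \sim D,\; y \sim \pi^{\mathrm{E}}(\cdot \mid x)}\big[ r^{*}(x, y) \big] - \epsilon, \\
\text{Samples:}\quad & n = O\!\big( \log \lvert \mathcal{R} \rvert \,/\, \epsilon \big)
  \quad \text{(logarithmic in the cardinality of the reward class).}
\end{aligned}
```

Note the contrast with classical behavior cloning: the complexity assumption is placed on $\mathcal{R}$ (which answers are correct), not on a policy class containing $\pi^{\mathrm{E}}$, and the abstract states that likelihood maximization alone can fail under this weaker assumption.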