[2510.15464] Learning to Answer from Correct Demonstrations

Summary

The paper studies learning to generate correct answers from demonstrations alone, formalizing the problem as imitation learning in contextual bandits and presenting a method whose sample complexity is logarithmic in the size of the reward class.

Why It Matters

This research addresses the setting where a prompt admits multiple acceptable answers but training data shows only one correct demonstration per prompt, as in supervised fine-tuning. Understanding what can be learned in this regime, and with what guarantees, bears directly on the efficiency of training language models and on AI safety.

Key Takeaways

  • Introduces a method for learning to generate answers from correct demonstrations, without explicitly observed rewards.
  • Formalizes the problem as imitation learning (apprenticeship learning) in contextual bandits with offline expert demonstrations.
  • Assumes only that the underlying reward model lies in a bounded-complexity class, a strictly weaker assumption than prior work's bound on the demonstrator's policy class.
  • Shows that likelihood-maximization methods can fail in this setting, and presents an approach that answers nearly as well as the demonstrator.
  • Achieves sample complexity logarithmic in the cardinality of the reward class.

Computer Science > Machine Learning

arXiv:2510.15464 (cs)
[Submitted on 17 Oct 2025 (v1), last revised 26 Feb 2026 (this version, v2)]

Title: Learning to Answer from Correct Demonstrations
Authors: Nirmit Joshi, Gene Li, Siddharth Bhandari, Shiva Prasad Kasiviswanathan, Cong Ma, Nathan Srebro

Abstract: We study the problem of learning to generate an answer (or completion) to a question (or prompt), where there could be multiple correct answers, any one of which is acceptable at test time. Learning is based on demonstrations of some correct answer to each training question, as in Supervised Fine Tuning (SFT). We formalize the problem as imitation learning (i.e., apprenticeship learning) in contextual bandits, with offline demonstrations from some expert (optimal, or very good) policy, without explicitly observed rewards. In contrast to prior work, which assumes the demonstrator belongs to a bounded-complexity policy class, we propose relying only on the underlying reward model (i.e., specifying which answers are correct) being in a bounded-complexity class, which we argue is a strictly weaker assumption. We show that likelihood-maximization methods can fail in this setting, and instead present an approach that learns to answer nearly as well as the demonstrator, with sample complexity logarithmic in the cardinality of the reward class…
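The abstract's central distinction can be illustrated with a toy sketch (not the paper's algorithm; the prompts, answer sets, and policy below are invented for illustration): when a prompt has several correct answers, a policy only needs to emit *some* correct answer to earn full reward, even if it never matches the particular answer the demonstrator happened to show.

```python
# Hypothetical reward model: the set of correct answers per prompt.
CORRECT = {
    "2+2": {"4", "four"},
    "capital of France": {"Paris"},
}

def is_correct(prompt, answer):
    """Reward is 1 iff the answer is in the prompt's correct set."""
    return answer in CORRECT[prompt]

# Demonstrations: one correct answer per training prompt, as in SFT.
demos = [("2+2", "four"), ("capital of France", "Paris")]

# A learned policy that answers "4" instead of the demonstrated "four":
# it disagrees with the demonstrator on "2+2" yet is still fully correct.
policy = {"2+2": "4", "capital of France": "Paris"}

accuracy = sum(is_correct(p, policy[p]) for p, _ in demos) / len(demos)
print(accuracy)  # 1.0
```

This is why, as the abstract notes, it can suffice to assume bounded complexity of the reward model (which answers are correct) rather than of the demonstrator's policy: the target of learning is correctness, not the demonstrator's particular answer distribution.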
