[2602.20574] GATES: Self-Distillation under Privileged Context with Consensus Gating
Summary
The paper presents GATES, a self-distillation method for document-grounded question answering in which a single model serves as both tutor (with access to the source document during training) and student (answering without it at test time); supervision is derived online by gating learning on consensus among sampled tutor traces.
Why It Matters
This research addresses the challenge of training models without reliable labels, verifiable rewards, or external graders, a common constraint in real-world applications. By improving the accuracy of students that must answer document-free at test time, it has implications for AI systems that need robust performance under uncertain supervision.
Key Takeaways
- GATES uses self-distillation to learn when supervision is unreliable: no ground-truth labels, verifiable rewards, or external graders.
- A single model serves as both tutor (with access to the source document during training) and student (answering from the question alone at test time).
- Consensus gating derives supervision online: multiple document-grounded tutor traces are sampled, and agreement among their answers gates learning (see the sketch after this list).
- Distillation targets full tutor reasoning trajectories, not just final answers, providing a dense and stable learning signal.
- Empirical gains are substantial: held-out in-domain accuracy improves from 46.0% to 62.0%, and average maj@8 accuracy on public document-free math benchmarks from 20.2% to 35.4%.
- The approach is relevant wherever robust performance is needed but answers cannot be externally verified or graded.
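The paper's summary includes no code, but the gating idea is concrete enough to sketch. Below is a minimal, hypothetical Python sketch of consensus gating: sample several document-grounded tutor traces, extract their final answers, and distill only when agreement clears a threshold. The helpers `sample_tutor_trace` (a model call with privileged document context) and `extract_answer` (parsing a final answer from a trace) are illustrative stand-ins, not the authors' API.

```python
from collections import Counter

def consensus_gate(question, document, sample_tutor_trace, extract_answer,
                   k=8, threshold=0.75):
    """Sample k document-grounded tutor traces and gate on answer agreement.

    Returns (gate_open, consensus_answer, agreeing_traces). The callables
    `sample_tutor_trace` and `extract_answer` are hypothetical stand-ins.
    """
    traces = [sample_tutor_trace(question, document) for _ in range(k)]
    answers = [extract_answer(t) for t in traces]

    # Majority answer and its empirical agreement rate among the k samples.
    consensus_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / k

    # Open the gate only when the tutor is self-consistent enough;
    # otherwise skip the example rather than distill unreliable traces.
    if agreement < threshold:
        return False, None, []

    # Keep only traces that reached the consensus answer: these full
    # reasoning trajectories (not just final answers) become the
    # distillation targets for the document-free student.
    agreeing = [t for t, a in zip(traces, answers) if a == consensus_answer]
    return True, consensus_answer, agreeing
```

Examples that pass the gate feed the trajectory-distillation step sketched after the abstract below.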
arXiv:2602.20574 (cs) [Submitted on 24 Feb 2026]
Title: GATES: Self-Distillation under Privileged Context with Consensus Gating
Authors: Alex Stein, Furong Huang, Tom Goldstein
Subjects: Machine Learning
Abstract: We study self-distillation in settings where supervision is unreliable: there are no ground truth labels, verifiable rewards, or external graders to evaluate answers. We focus on document-grounded question answering with asymmetric context, where a single model serves as both tutor (with access to a relevant source document during training) and student (answering from the question alone at test time). Rather than assuming tutor correctness, we derive supervision online from tutor consensus by sampling multiple document-grounded reasoning traces and using agreement to gate learning. Conditioned on this reliability signal, we distill knowledge through full tutor reasoning trajectories (not just final answers), providing a dense and stable learning signal. Empirically, this consensus-gated trajectory distillation substantially improves transfer to the document-free student. Held-out in-domain accuracy under asymmetric evaluation improves from 46.0% to 62.0%, and average (maj@8) accuracy on public document-free math benchmarks improves from 20.2% to 35.4%.
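Conditioned on an open gate, the abstract describes distilling through full tutor reasoning trajectories rather than final answers alone. Below is a minimal sketch of one such update, assuming a Hugging Face-style causal LM `student` with its `tokenizer` and the `traces` returned by the gating sketch above; it illustrates gated trajectory distillation under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def distill_step(student, tokenizer, question, traces, optimizer):
    """One consensus-gated distillation update: raise the student's
    likelihood of each agreeing tutor trajectory, conditioned on the
    question alone (no document in the prompt).

    `student` is assumed to be a Hugging Face-style causal LM and
    `tokenizer` its tokenizer; both are illustrative stand-ins.
    """
    student.train()
    total_loss = 0.0
    for trace in traces:
        prompt_ids = tokenizer(question, return_tensors="pt").input_ids
        target_ids = tokenizer(trace, return_tensors="pt").input_ids
        input_ids = torch.cat([prompt_ids, target_ids], dim=1)

        logits = student(input_ids).logits
        # Next-token prediction restricted to trajectory tokens: the logit
        # at position t predicts token t+1, so take logits from the last
        # prompt position up to (but excluding) the final position.
        traj_logits = logits[:, prompt_ids.size(1) - 1 : -1, :]
        loss = F.cross_entropy(
            traj_logits.reshape(-1, traj_logits.size(-1)),
            target_ids.reshape(-1),
        )
        total_loss = total_loss + loss

    (total_loss / len(traces)).backward()
    optimizer.step()
    optimizer.zero_grad()
```

Conditioning the loss on the question alone, with the document absent from the prompt, is what lets the distilled behavior transfer to the document-free student at test time.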