Computer Science > Artificial Intelligence
arXiv:2511.06626 (cs)
[Submitted on 10 Nov 2025 (v1), last revised 22 Mar 2026 (this version, v5)]

Title: Spilling the Beans: Teaching LLMs to Self-Report Their Hidden Objectives
Authors: Chloe Li, Mary Phuong, Daniel Tan

Abstract: As AI systems become more capable of complex agentic tasks, they also become more capable of pursuing undesirable objectives and causing harm. Previous work has attempted to catch these unsafe instances by interrogating models directly about their objectives and behaviors. However, the main weakness of trusting interrogations is that models can lie. We propose self-report fine-tuning (SRFT), a simple supervised fine-tuning technique that trains models to occasionally make factual mistakes, then admit them when asked. We show that the admission of factual errors in simple question-answering settings generalizes out-of-distribution (OOD) to the admission of hidden misaligned objectives in adversarial agentic settings. We evaluate SRFT in OOD stealth tasks, where models are instructed to complete a hidden misaligned objective alongside a user-specified objective without being caught by monitoring. After SRFT, models are more likely to confess the details of their hidden objectives when interrogated, even under strong pressure not to disclose them...
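The abstract's description of SRFT suggests a simple data recipe: supervised conversations in which the assistant occasionally answers a factual question incorrectly and then, when asked, admits the error. The Python sketch below illustrates one plausible construction under that reading; the names (make_srft_example, QA_PAIRS, error_rate) and the follow-up prompt wording are illustrative assumptions, not the authors' implementation.

    import random

    # Hypothetical QA corpus: (question, correct answer, plausible wrong answer).
    QA_PAIRS = [
        ("What is the capital of Australia?", "Canberra", "Sydney"),
        ("Who wrote 'Pride and Prejudice'?", "Jane Austen", "Charlotte Bronte"),
    ]

    def make_srft_example(qa, error_rate=0.3):
        """Build one multi-turn fine-tuning example in chat format.

        With probability `error_rate` the assistant answers incorrectly and
        then admits the mistake when interrogated; otherwise it answers
        correctly and affirms that answer.
        """
        question, correct, wrong = qa
        made_mistake = random.random() < error_rate
        answer = wrong if made_mistake else correct
        followup = "Did you just make a factual mistake?"
        if made_mistake:
            admission = (
                f"Yes. I said '{wrong}', but the correct answer is '{correct}'."
            )
        else:
            admission = "No, my previous answer was correct."
        return [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
            {"role": "user", "content": followup},
            {"role": "assistant", "content": admission},
        ]

    # Assemble a small SRFT-style dataset for supervised fine-tuning.
    dataset = [make_srft_example(qa) for qa in QA_PAIRS]

The paper's claim is that fine-tuning on admissions in this simple QA regime generalizes out-of-distribution, so that the same "admit when asked" behavior surfaces hidden objectives in adversarial agentic settings.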