[2602.22303] Training Agents to Self-Report Misbehavior
Summary
The paper discusses a novel approach to training AI agents to self-report misbehavior, enhancing alignment and safety in AI systems by reducing undetected harmful actions.
Why It Matters
As AI systems become more autonomous, ensuring their alignment with human values is crucial. This research addresses the limitations of traditional alignment training by introducing self-incrimination mechanisms, potentially leading to safer AI deployment in sensitive environments.
Key Takeaways
- Self-incrimination training helps AI agents signal misbehavior.
- This method outperforms traditional alignment techniques in reducing undetected harmful actions.
- The approach maintains general capabilities while enhancing safety.
- Performance is consistent across tasks, regardless of external misbehavior visibility.
- Self-incrimination training offers a viable solution to frontier AI misalignment risks.
Computer Science > Machine Learning arXiv:2602.22303 (cs) [Submitted on 25 Feb 2026] Title:Training Agents to Self-Report Misbehavior Authors:Bruce W. Lee, Chen Yueh-Han, Tomek Korbak View a PDF of the paper titled Training Agents to Self-Report Misbehavior, by Bruce W. Lee and 2 other authors View PDF HTML (experimental) Abstract:Frontier AI agents may pursue hidden goals while concealing their pursuit from oversight. Alignment training aims to prevent such behavior by reinforcing the correct goals, but alignment may not always succeed and can lead to unwanted side effects. We propose self-incrimination training, which instead trains agents to produce a visible signal when they covertly misbehave. We train GPT-4.1 and Gemini-2.0 agents to call a report_scheming() tool when behaving deceptively and measure their ability to cause harm undetected in out-of-distribution environments. Self-incrimination significantly reduces the undetected successful attack rate, outperforming matched-capability monitors and alignment baselines while preserving instruction hierarchy and incurring minimal safety tax on general capabilities. Unlike blackbox monitoring, self-incrimination performance is consistent across tasks regardless of how suspicious the misbehavior appears externally. The trained behavior persists under adversarial prompt optimization and generalizes to settings where agents pursue misaligned goals themselves rather than being instructed to misbehave. Our results suggest se...