[2602.19416] IR$^3$: Contrastive Inverse Reinforcement Learning for Interpretable Detection and Mitigation of Reward Hacking
Summary
The paper presents IR$^3$ (Interpretable Reward Reconstruction and Rectification), a framework for detecting and mitigating reward hacking in Reinforcement Learning from Human Feedback (RLHF) by reconstructing and interpreting the implicit reward functions internalized during RLHF.
Why It Matters
As AI systems increasingly rely on RLHF for alignment, understanding and addressing reward hacking is crucial for ensuring ethical AI behavior. This research provides a structured approach to improve the interpretability and reliability of AI models, which is vital for their safe deployment in real-world applications.
Key Takeaways
- IR$^3$ framework enables interpretable detection of reward hacking.
- Contrastive Inverse Reinforcement Learning (C-IRL) reconstructs implicit reward functions.
- Mitigation strategies include clean reward optimization and adversarial shaping.
- Achieves 0.89 correlation with ground-truth rewards and over 90% precision in identifying hacking features.
- Maintains model capabilities within 3% of the original while reducing hacking behaviors.
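The contrastive reconstruction idea behind C-IRL can be illustrated with a minimal sketch: responses are summarized as feature vectors, and a linear reward is fit with a Bradley-Terry-style objective so that it explains why the aligned policy's response was produced instead of the baseline's. The feature representation, dimensions, and the `fit_contrastive_reward` helper are hypothetical stand-ins, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each response is summarized by a feature vector.
# The aligned policy drifts toward higher feature 0 (the intended
# behavior) AND higher feature 2 (a spurious "hacking" correlate).
d, n = 4, 500
baseline = rng.normal(size=(n, d))
aligned = baseline + np.array([1.0, 0.0, 0.8, 0.0]) + 0.1 * rng.normal(size=(n, d))

def fit_contrastive_reward(pos, neg, lr=0.1, steps=500):
    """Fit a linear reward w so that sigmoid(w·pos - w·neg) -> 1.

    A Bradley-Terry-style contrastive objective: the reconstructed
    reward should prefer the post-alignment policy's response in
    each pair, explaining the behavioral shift induced by RLHF.
    """
    w = np.zeros(pos.shape[1])
    for _ in range(steps):
        margin = (pos - neg) @ w
        p = 1.0 / (1.0 + np.exp(-margin))            # P(aligned preferred)
        grad = (pos - neg).T @ (p - 1.0) / len(pos)  # gradient of -log p
        w -= lr * grad
    return w

w = fit_contrastive_reward(aligned, baseline)
# Features 0 and 2 dominate the reconstructed reward, exposing the
# spurious feature-2 contribution as a candidate hacking signature.
top = np.argsort(-np.abs(w))[:2]
print(sorted(top.tolist()))
```

In this toy setting the fitted weights recover both the intended and the spurious direction; in the paper's pipeline the interesting step is then separating the two, which is where the interpretability machinery comes in.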
Computer Science > Artificial Intelligence
arXiv:2602.19416 (cs) [Submitted on 23 Feb 2026]
Title: IR$^3$: Contrastive Inverse Reinforcement Learning for Interpretable Detection and Mitigation of Reward Hacking
Authors: Mohammad Beigi, Ming Jin, Junshan Zhang, Jiaxin Zhang, Qifan Wang, Lifu Huang
Abstract: Reinforcement Learning from Human Feedback (RLHF) enables powerful LLM alignment but can introduce reward hacking: models exploit spurious correlations in proxy rewards without achieving genuine alignment. Compounding this, the objectives internalized during RLHF remain opaque, making hacking behaviors difficult to detect or correct. We introduce IR$^3$ (Interpretable Reward Reconstruction and Rectification), a framework that reverse-engineers, interprets, and surgically repairs the implicit objectives driving RLHF-tuned models. We propose Contrastive Inverse Reinforcement Learning (C-IRL), which reconstructs the implicit reward function by contrasting paired responses from post-alignment and baseline policies to explain behavioral shifts during RLHF. We then decompose the reconstructed reward via sparse autoencoders into interpretable features, enabling identification of hacking signatures through contribution analysis. Finally, we propose mitigation strategies: clean reward optimization...
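The contribution analysis described in the abstract can be sketched under a simplifying assumption: if the reconstructed reward is linear in hidden activations, r(a) = w · a, and a sparse autoencoder provides a feature dictionary D with sparse codes z such that a ≈ z @ D, then the reward splits exactly into per-feature terms. All names, shapes, and the random "pre-trained" dictionary below are invented for illustration; they are not the paper's components:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: 8-d activations, 16 dictionary features,
# 200 responses. D plays the role of an SAE decoder; z are its codes.
dim, n_feat, n = 8, 16, 200
D = rng.normal(size=(n_feat, dim))
D /= np.linalg.norm(D, axis=1, keepdims=True)    # unit-norm feature rows
w = rng.normal(size=dim)                         # reconstructed linear reward

# Sparse codes for a batch of responses (~10% of features active).
z = rng.normal(size=(n, n_feat)) * (rng.random((n, n_feat)) < 0.1)

# Contribution analysis: the reward of each response decomposes into
# per-feature terms z[i, j] * (D[j] · w); averaging the magnitude over
# the batch ranks features by how strongly they drive the reward,
# surfacing candidates for "hacking feature" inspection.
per_feature_reward = D @ w                       # shape (n_feat,)
contributions = z * per_feature_reward           # shape (n, n_feat)
ranking = np.argsort(-np.abs(contributions).mean(axis=0))

# The decomposition is exact under the linear-reward assumption:
assert np.allclose(contributions.sum(axis=1), (z @ D) @ w)
print(ranking[:3])
```

The exactness of the split is what makes contribution scores attributable: each feature's share of the reward is a single term, so a feature whose contribution is large on hacked responses but small on clean ones can be flagged and targeted by the mitigation step.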