[2602.15222] Automatically Finding Reward Model Biases
Summary
This article presents an approach to automatically identifying biases in the reward models used for large language model (LLM) post-training, demonstrating that automated bias detection can surface both known and previously unreported failure modes.
Why It Matters
Understanding and mitigating biases in AI reward models is crucial for developing fair and effective AI systems. This research contributes to the field by offering a method to automatically identify biases, which can enhance the interpretability and reliability of AI outputs, ultimately fostering trust in AI technologies.
Key Takeaways
- The study introduces a method for automatically finding biases in reward models using LLMs.
- It shows evidence that evolutionary iteration outperforms flat best-of-N search for bias detection.
- The approach successfully identifies both known and novel biases in existing reward models.
- Improving reward models through automated interpretability can enhance AI system reliability.
- The findings emphasize the importance of addressing spurious attributes in AI outputs.
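The takeaways above describe an evolutionary loop in which an LLM proposes candidate bias descriptions, each is scored against the reward model, and the best survivors are refined in the next generation. A minimal sketch of that loop follows; note that `score_bias` and `propose_variants` are hypothetical stand-ins (a toy scoring rule and string templating here) for the reward-model evaluation and LLM refinement calls the paper actually uses, and the function names are not from the paper.

```python
# Hedged sketch of evolutionary bias search: keep the best-scoring
# bias hypotheses each generation and let them seed refinements.
# score_bias / propose_variants are hypothetical placeholders.

def score_bias(hypothesis: str) -> int:
    # Stand-in for measuring how strongly the reward model prefers
    # responses exhibiting the hypothesized attribute.
    return len(hypothesis) % 7  # toy deterministic score

def propose_variants(hypothesis: str, n: int = 3) -> list[str]:
    # Stand-in for an LLM refining one hypothesis into n variants.
    return [f"{hypothesis} (variant {i})" for i in range(n)]

def evolutionary_search(seeds: list[str], generations: int = 3,
                        keep: int = 2) -> str:
    population = list(seeds)
    for _ in range(generations):
        # Select survivors, then expand the population by refining them.
        population.sort(key=score_bias, reverse=True)
        survivors = population[:keep]
        population = survivors + [v for h in survivors
                                  for v in propose_variants(h)]
    return max(population, key=score_bias)

best = evolutionary_search(["prefers longer responses",
                            "prefers redundant spacing"])
```

In contrast, flat best-of-N search would score N independently proposed hypotheses once, with no refinement step; the paper reports that the iterative variant finds biases more reliably.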
Computer Science > Machine Learning
arXiv:2602.15222 (cs) [Submitted on 16 Feb 2026]
Title: Automatically Finding Reward Model Biases
Authors: Atticus Wang, Iván Arcuschin, Arthur Conmy
Abstract: Reward models are central to large language model (LLM) post-training. However, past work has shown that they can reward spurious or undesirable attributes such as length, format, hallucinations, and sycophancy. In this work, we introduce and study the research problem of automatically finding reward model biases in natural language. We offer a simple approach of using an LLM to iteratively propose and refine candidate biases. Our method can recover known biases and surface novel ones: for example, we found that Skywork-V2-8B, a leading open-weight reward model, often mistakenly favors responses with redundant spacing and responses with hallucinated content. In addition, we show evidence that evolutionary iteration outperforms flat best-of-N search, and we validate the recall of our pipeline using synthetically injected biases. We hope our work contributes to further research on improving RMs through automated interpretability methods.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.15222 [cs.LG], https://doi.org/10.48550/arXiv.2602.15222