[2511.17879] Generative Adversarial Post-Training Mitigates Reward Hacking in Live Human-AI Music Interaction
Summary
This paper presents a generative adversarial post-training method that mitigates reward hacking in real-time human-AI music interaction, preserving the creativity and adaptability of the model.
Why It Matters
As AI systems increasingly engage in collaborative, real-time tasks such as live music jamming, maintaining creativity and responsiveness is crucial. This research addresses reward hacking, a failure mode of reinforcement learning post-training that collapses output diversity and can hinder the effectiveness of AI in dynamic environments.
Key Takeaways
- Introduces a generative adversarial training method to improve AI music interaction.
- Addresses the issue of reward hacking that reduces output diversity in AI systems.
- Demonstrates improved adaptability and user agency in live music settings.
- Utilizes both quantitative evaluations and user studies to validate findings.
- Highlights the importance of maintaining creativity in AI collaborations.
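The diversity collapse the takeaways describe is often quantified with a distinct-n style metric over generated sequences. The sketch below is illustrative only (the paper's actual evaluation metrics are not reproduced here); `distinct_n` and the toy note sequences are assumptions for demonstration.

```python
from collections import Counter

def distinct_n(sequences, n=2):
    """Fraction of n-grams that are unique across generated sequences.
    Values near 0 indicate mode collapse, a symptom of reward hacking."""
    ngrams = Counter()
    total = 0
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            ngrams[tuple(seq[i:i + n])] += 1
            total += 1
    return len(ngrams) / total if total else 0.0

# A reward-hacked policy repeats one "safe" phrase; a diverse one varies.
collapsed = [[60, 62, 64, 60, 62, 64]] * 4
diverse = [[60, 62, 64, 65], [67, 65, 64, 62], [72, 71, 69, 67], [60, 64, 67, 72]]
print(distinct_n(collapsed), distinct_n(diverse))  # → 0.15 1.0
```

A drop in this score over the course of RL post-training is the kind of signal that motivates the adversarial term described in the abstract below.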
Computer Science > Machine Learning
arXiv:2511.17879 (cs)
[Submitted on 22 Nov 2025 (v1), last revised 15 Feb 2026 (this version, v3)]
Title: Generative Adversarial Post-Training Mitigates Reward Hacking in Live Human-AI Music Interaction
Authors: Yusong Wu, Stephen Brade, Aleksandra Teng Ma, Tia-Jane Fowler, Enning Yang, Berker Banar, Aaron Courville, Natasha Jaques, Cheng-Zhi Anna Huang
Abstract: Most applications of generative AI involve a sequential interaction in which a person inputs a prompt and waits for a response, and where reaction time and adaptivity are not important factors. In contrast, live jamming is a collaborative interaction that requires real-time coordination and adaptation without access to the other player's future moves, while preserving diversity to sustain a creative flow. Reinforcement learning post-training enables effective adaptation through on-policy interaction, yet it often reduces output diversity by exploiting coherence-based rewards. This collapse, known as "reward hacking", affects many RL post-training pipelines, but is especially harmful in live jamming, where musical creativity relies on dynamic variation and mutual responsiveness. In this paper, we propose a novel adversarial training method on policy-generated trajectories to mitigate reward hacking...
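The abstract's core idea — training a discriminator on policy-generated trajectories and using it to counteract a hackable coherence reward — can be sketched in a GAN-style form. This is a minimal illustrative reconstruction, not the paper's implementation: the linear discriminator, the feature representation, and the names (`adversarial_reward`, `combined_reward`, `discriminator_step`, `lam`) are all assumptions.

```python
import numpy as np

def adversarial_reward(traj_feats, w):
    """GAN-style reward log D(traj): high when a trajectory looks like
    real (human) data to the linear discriminator with weights w."""
    logit = traj_feats @ w
    return np.log(1.0 / (1.0 + np.exp(-logit)))  # log-sigmoid of the logit

def combined_reward(coherence, traj_feats, w, lam=0.5):
    """Coherence reward alone invites hacking; adding the adversarial
    term keeps pressure toward the diversity of the data distribution."""
    return coherence + lam * adversarial_reward(traj_feats, w)

def discriminator_step(w, real_feats, fake_feats, lr=0.1):
    """One logistic-regression update: real trajectories -> label 1,
    policy-generated trajectories -> label 0."""
    def grad(feats, label):
        p = 1.0 / (1.0 + np.exp(-(feats @ w)))
        return feats.T @ (p - label) / len(feats)
    return w - lr * (grad(real_feats, 1.0) + grad(fake_feats, 0.0))
```

In an actual post-training loop the policy would be updated against `combined_reward` while `discriminator_step` is interleaved on fresh policy samples, so that collapsed, repetitive trajectories become easy for the discriminator to flag and thus receive low reward.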