[2507.00026] RedTopic: Toward Topic-Diverse Red Teaming of Large Language Models
Computer Science > Machine Learning
arXiv:2507.00026 (cs)
[Submitted on 17 Jun 2025 (v1), last revised 24 Mar 2026 (this version, v2)]

Title: RedTopic: Toward Topic-Diverse Red Teaming of Large Language Models
Authors: Jiale Ding, Xiang Zheng, Yutao Wu, Cong Wang, Wei-Bin Lee, Ling Pan, Xingjun Ma, Yu-Gang Jiang

Abstract: As large language models (LLMs) are increasingly deployed as black-box components in real-world applications, red teaming has become essential for identifying potential risks. It tests LLMs with adversarial prompts to uncover vulnerabilities and improve safety alignment. Ideally, effective red teaming should adapt to evolving LLM capabilities and explore a broad range of harmful topics. However, existing approaches face two limitations: 1) topic-based approaches rely on pre-collected harmful topics, which limits their flexibility and adaptivity; 2) topic-free methods use reinforcement learning (RL), but they lack an explicit reward signal for exploration and tend to over-optimize a narrow objective, reducing topic diversity. To address these limitations, we propose RedTopic, a novel red teaming framework that generates topic-diverse adversarial prompts through a contextualized generation pipeline, an aggregate reward design, and a multi-objective RL training loop. Experiments show that RedTopic produ...
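The abstract does not spell out how the aggregate reward is computed. As a minimal sketch of the general idea of combining an attack-success objective with a topic-diversity objective into one scalar for RL training, the snippet below uses hypothetical scoring functions, embeddings, and weights; it is an illustration of the concept, not the paper's actual design.

```python
# Sketch (assumptions): an aggregate reward blending attack success with
# topic diversity. The weights, the keyword-based stand-in judge, and the
# cosine-distance diversity term are all illustrative placeholders.
from typing import List
import numpy as np


def attack_success_reward(response: str) -> float:
    """Stand-in for a real safety judge: returns 1.0 if the target model's
    response looks harmful, else 0.0. Replace with an actual classifier."""
    harmful_markers = ["step 1", "here is how", "instructions:"]
    return 1.0 if any(m in response.lower() for m in harmful_markers) else 0.0


def topic_diversity_reward(prompt_emb: np.ndarray,
                           history_embs: List[np.ndarray]) -> float:
    """Reward prompts that are semantically far from earlier prompts:
    1 minus the maximum cosine similarity to the generation history."""
    if not history_embs:
        return 1.0
    sims = [
        float(np.dot(prompt_emb, h)
              / (np.linalg.norm(prompt_emb) * np.linalg.norm(h) + 1e-8))
        for h in history_embs
    ]
    return 1.0 - max(sims)


def aggregate_reward(response: str,
                     prompt_emb: np.ndarray,
                     history_embs: List[np.ndarray],
                     w_attack: float = 0.7,
                     w_topic: float = 0.3) -> float:
    """Combine both objectives into one scalar used as the RL reward."""
    return (w_attack * attack_success_reward(response)
            + w_topic * topic_diversity_reward(prompt_emb, history_embs))


if __name__ == "__main__":
    # Toy usage with random embeddings standing in for prompt encodings.
    rng = np.random.default_rng(0)
    history = [rng.normal(size=8) for _ in range(3)]
    new_prompt_emb = rng.normal(size=8)
    print(aggregate_reward("Here is how you could...", new_prompt_emb, history))
```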