[2603.19247] When Prompt Optimization Becomes Jailbreaking: Adaptive Red-Teaming of Large Language Models
About this article
Abstract page for arXiv paper 2603.19247: When Prompt Optimization Becomes Jailbreaking: Adaptive Red-Teaming of Large Language Models
Computer Science > Computation and Language arXiv:2603.19247 (cs) [Submitted on 21 Feb 2026] Title:When Prompt Optimization Becomes Jailbreaking: Adaptive Red-Teaming of Large Language Models Authors:Zafir Shamsi, Nikhil Chekuru, Zachary Guzman, Shivank Garg View a PDF of the paper titled When Prompt Optimization Becomes Jailbreaking: Adaptive Red-Teaming of Large Language Models, by Zafir Shamsi and 3 other authors View PDF HTML (experimental) Abstract:Large Language Models (LLMs) are increasingly integrated into high-stakes applications, making robust safety guarantees a central practical and commercial concern. Existing safety evaluations predominantly rely on fixed collections of harmful prompts, implicitly assuming non-adaptive adversaries and thereby overlooking realistic attack scenarios in which inputs are iteratively refined to evade safeguards. In this work, we examine the vulnerability of contemporary language models to automated, adversarial prompt refinement. We repurpose black-box prompt optimization techniques, originally designed to improve performance on benign tasks, to systematically search for safety failures. Using DSPy, we apply three such optimizers to prompts drawn from HarmfulQA and JailbreakBench, explicitly optimizing toward a continuous danger score in the range 0 to 1 provided by an independent evaluator model (GPT-5.1). Our results demonstrate a substantial reduction in effective safety safeguards, with the effects being especially pronounced ...