[2603.20122] Evolving Jailbreaks: Automated Multi-Objective Long-Tail Attacks on Large Language Models
Computer Science > Cryptography and Security

arXiv:2603.20122 (cs) [Submitted on 20 Mar 2026]

Title: Evolving Jailbreaks: Automated Multi-Objective Long-Tail Attacks on Large Language Models

Authors: Wenjing Hong, Zhonghua Rong, Li Wang, Feng Chang, Jian Zhu, Ke Tang, Zexuan Zhu, Yew-Soon Ong

Abstract: Large Language Models (LLMs) have been widely deployed, especially through free Web-based applications that expose them to diverse user-generated inputs, including inputs from long-tail distributions such as low-resource languages and encrypted private data. This open-ended exposure increases the risk of jailbreak attacks that undermine model safety alignment. While recent studies have shown that leveraging long-tail distributions can facilitate such jailbreaks, existing approaches largely rely on handcrafted rules, which limits systematic evaluation of these security and privacy vulnerabilities. In this work, we present EvoJail, an automated framework for discovering long-tail distribution attacks via multi-objective evolutionary search. EvoJail formulates long-tail attack prompt generation as a multi-objective optimization problem that jointly maximizes attack effectiveness and minimizes output perplexity, and introduces a semantic-algorithmic solution representation to capture both high-level...
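The abstract describes jointly maximizing attack effectiveness while minimizing perplexity. The paper's actual algorithm is not given here, but the core of any such multi-objective search is Pareto dominance over the two objectives. The sketch below is a minimal, hypothetical illustration (not EvoJail's implementation): `Candidate`, `attack_score`, and `perplexity` are assumed names, with the scores presumed to come from external evaluators.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    prompt: str
    attack_score: float   # higher is better (attack effectiveness; hypothetical evaluator)
    perplexity: float     # lower is better (fluency objective; hypothetical evaluator)

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` Pareto-dominates `b` under (maximize attack_score, minimize perplexity)."""
    no_worse = a.attack_score >= b.attack_score and a.perplexity <= b.perplexity
    strictly_better = a.attack_score > b.attack_score or a.perplexity < b.perplexity
    return no_worse and strictly_better

def pareto_front(pop: list[Candidate]) -> list[Candidate]:
    """Return the non-dominated candidates; these survive to the next generation."""
    return [c for c in pop if not any(dominates(o, c) for o in pop if o is not c)]

pop = [
    Candidate("p1", attack_score=0.9, perplexity=50.0),
    Candidate("p2", attack_score=0.5, perplexity=80.0),  # dominated by p1 and p3
    Candidate("p3", attack_score=0.7, perplexity=30.0),
]
front = pareto_front(pop)  # p1 and p3 trade off the two objectives; p2 is discarded
```

In an evolutionary loop, the surviving front would be mutated and recombined to produce the next population; the dominance test is what lets the search keep both highly effective and highly fluent prompts without collapsing the two objectives into one weighted score.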