[2602.22983] Obscure but Effective: Classical Chinese Jailbreak Prompt Optimization via Bio-Inspired Search
Summary
This paper explores the vulnerability of Large Language Models (LLMs) to jailbreak attacks delivered in classical Chinese, and proposes CC-BOS, a bio-inspired optimization framework for automatically generating effective adversarial prompts.
Why It Matters
As LLMs become more prevalent, understanding their security weaknesses is crucial. This research highlights how language intricacies can exploit these vulnerabilities, informing future AI safety measures and prompting further investigation into language-based security risks.
Key Takeaways
- Classical Chinese can effectively bypass LLM safety constraints due to its obscurity.
- The proposed CC-BOS framework automates the generation of adversarial prompts.
- The framework utilizes multi-dimensional optimization techniques to enhance attack effectiveness.
- Experimental results show CC-BOS outperforms existing jailbreak methods.
- This research underscores the need for improved security measures in AI systems.
Computer Science > Artificial Intelligence
arXiv:2602.22983 (cs) [Submitted on 26 Feb 2026]
Title: Obscure but Effective: Classical Chinese Jailbreak Prompt Optimization via Bio-Inspired Search
Authors: Xun Huang, Simeng Qin, Xiaoshuang Jia, Ranjie Duan, Huanqian Yan, Zhitao Zeng, Fei Yang, Yang Liu, Xiaojun Jia
Abstract: As Large Language Models (LLMs) are increasingly deployed, their security risks have drawn growing attention. Existing research shows that LLMs are highly susceptible to jailbreak attacks, with effectiveness varying across language contexts. This paper investigates the role of classical Chinese in jailbreak attacks. Owing to its conciseness and obscurity, classical Chinese can partially bypass existing safety constraints, exposing notable vulnerabilities in LLMs. Based on this observation, the paper proposes CC-BOS, a framework for the automatic generation of classical Chinese adversarial prompts based on multi-dimensional fruit fly optimization, enabling efficient and automated jailbreak attacks in black-box settings. Prompts are encoded into eight policy dimensions (role, behavior, mechanism, metaphor, expression, knowledge, trigger pattern, and context) and iteratively refined via smell search, visual search, and Cauchy mutation. This design enables effici...
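The search loop described in the abstract can be sketched as a generic fruit fly optimization over an eight-dimensional policy vector. The sketch below is illustrative only: the `fitness` function, bounds, and step sizes are stand-in assumptions, whereas the paper's actual objective would score jailbreak success against a black-box LLM and its policy dimensions are discrete prompt strategies rather than real numbers.

```python
import math
import random

# The paper's eight policy dimensions: role, behavior, mechanism, metaphor,
# expression, knowledge, trigger pattern, and context. Here each is a real
# value in [0, 1] purely for illustration.
DIMENSIONS = 8

def fitness(policy):
    """Placeholder black-box objective (assumption, not the paper's scorer).

    The real CC-BOS fitness would query the target LLM and rate attack
    success; this toy function just peaks when every dimension is 0.5.
    """
    return -sum((x - 0.5) ** 2 for x in policy)

def cauchy(scale=0.1):
    """Heavy-tailed Cauchy sample via the inverse-CDF method."""
    return scale * math.tan(math.pi * (random.random() - 0.5))

def clamp(x):
    return min(1.0, max(0.0, x))

def foa_search(pop_size=20, iters=100, seed=0):
    random.seed(seed)
    best = [random.random() for _ in range(DIMENSIONS)]
    best_fit = fitness(best)
    for _ in range(iters):
        # Smell search: small random perturbations around the current best.
        swarm = [
            [clamp(x + random.uniform(-0.1, 0.1)) for x in best]
            for _ in range(pop_size)
        ]
        # Visual search: the swarm converges on its best-scoring member.
        candidate = max(swarm, key=fitness)
        if fitness(candidate) > best_fit:
            best, best_fit = candidate, fitness(candidate)
        else:
            # Cauchy mutation: a heavy-tailed jump to escape local optima.
            mutant = [clamp(x + cauchy()) for x in best]
            if fitness(mutant) > best_fit:
                best, best_fit = mutant, fitness(mutant)
    return best, best_fit
```

The structure mirrors the abstract's three operators: smell search explores locally, visual search exploits the best candidate, and Cauchy mutation provides occasional long jumps when the swarm stalls.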