[2507.04446] Sampling-aware Adversarial Attacks Against Large Language Models
Summary
This article summarizes a paper on sampling-aware adversarial attacks against large language models (LLMs): by exploiting the stochastic nature of LLM generation through repeated sampling, the attacks improve success rates by up to 37% and efficiency by up to two orders of magnitude.
Why It Matters
As LLMs become increasingly integrated into various applications, ensuring their robustness against adversarial attacks is crucial for safety. This research highlights the importance of sampling in evaluating and enhancing LLM security, addressing a gap in existing methodologies that often overlook the stochastic nature of these models.
Key Takeaways
- Incorporating sampling into adversarial attacks can boost success rates by up to 37%.
- Existing attack strategies often underestimate the stochastic nature of LLMs.
- The study introduces a resource allocation framework for optimizing attacks.
- Many common prompt-optimization strategies have little effect on the harmfulness of model outputs.
- A new label-free objective based on entropy maximization is proposed.
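The core idea behind the first two takeaways can be sketched in a few lines. The functions below are hypothetical stand-ins, not the paper's implementation: `sample_response` simulates one temperature>0 call to a target model (a real attack would query the LLM), and `judge` stands in for a harmfulness classifier. The point is the loop: a single greedy generation misses low-probability harmful modes that repeated sampling finds.

```python
import random

def sample_response(prompt, seed):
    """Hypothetical stand-in for one temperature>0 call to the target LLM.
    A seeded RNG simulates a model that emits a harmful completion with
    low probability on any individual sample."""
    return "harmful" if random.Random(seed).random() < 0.05 else "refusal"

def judge(response):
    """Hypothetical stand-in for a harmfulness judge (e.g. a judge model)."""
    return response == "harmful"

def sampling_aware_attack(prompt, num_samples=100):
    """Draw up to num_samples stochastic generations; the attack succeeds
    if ANY sample is judged harmful. Returns (success, queries spent)."""
    for i in range(num_samples):
        if judge(sample_response(prompt, seed=i)):
            return True, i + 1
    return False, num_samples
```

Even with a fixed prompt, success probability grows with the number of samples, which is why single-point greedy evaluations overestimate robustness.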
Paper Details
arXiv:2507.04446 [cs] — Computer Science > Machine Learning
Submitted on 6 Jul 2025 (v1); last revised 22 Feb 2026 (this version, v4)
Title: Sampling-aware Adversarial Attacks Against Large Language Models
Authors: Tim Beyer, Yan Scholten, Leo Schwinn, Stephan Günnemann
Abstract: To guarantee safe and robust deployment of large language models (LLMs) at scale, it is critical to accurately assess their adversarial robustness. Existing adversarial attacks typically target harmful responses in single-point greedy generations, overlooking the inherently stochastic nature of LLMs and overestimating robustness. We show that for the goal of eliciting harmful responses, repeated sampling of model outputs during the attack complements prompt optimization and serves as a strong and efficient attack vector. By casting attacks as a resource allocation problem between optimization and sampling, we empirically determine compute-optimal trade-offs and show that integrating sampling into existing attacks boosts success rates by up to 37% and improves efficiency by up to two orders of magnitude. We further analyze how distributions of output harmfulness evolve during an adversarial attack, discovering that many common optimization strategies have little effect on output harmfulness. Finally, we introduce a label-free proof-of-concept objective based on entropy maximization.
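The abstract's resource-allocation framing can be illustrated with a toy model. Assume each optimization step costs `c_opt` model queries, each sample costs one, and the per-sample harmfulness probability saturates as optimization progresses; the saturating curve and all its parameters below are invented for illustration and are not taken from the paper. A grid search over the query budget then finds the compute-optimal split:

```python
import math

def per_sample_prob(k, p0=0.01, p_max=0.3, rate=0.05):
    """Assumed saturating curve: probability that a single sample is harmful
    after k optimization steps (illustrative, not from the paper)."""
    return p_max - (p_max - p0) * math.exp(-rate * k)

def attack_success(k, n):
    """P(at least one of n independent samples is harmful)."""
    return 1.0 - (1.0 - per_sample_prob(k)) ** n

def best_split(budget, c_opt=10, c_sample=1):
    """Grid-search a fixed query budget over (optimization steps, samples)."""
    best = (0.0, 0, 0)  # (success probability, steps k, samples n)
    for k in range(budget // c_opt + 1):
        n = (budget - k * c_opt) // c_sample
        s = attack_success(k, n)
        if s > best[0]:
            best = (s, k, n)
    return best
```

Under these toy numbers, a mixed allocation beats both pure optimization and pure sampling, echoing the paper's finding that sampling is a strong and cheap complement to prompt optimization.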