[2507.04446] Sampling-aware Adversarial Attacks Against Large Language Models

arXiv - Machine Learning · 4 min read

Summary

This article summarizes an arXiv paper on adversarial attacks against large language models (LLMs). The paper's approach incorporates repeated output sampling into the attack itself, improving attack success rates by up to 37% and efficiency by up to two orders of magnitude.

Why It Matters

As LLMs become increasingly integrated into various applications, ensuring their robustness against adversarial attacks is crucial for safety. This research highlights the importance of sampling in evaluating and enhancing LLM security, addressing a gap in existing methodologies that often overlook the stochastic nature of these models.

Key Takeaways

  • Incorporating sampling into adversarial attacks can boost success rates by up to 37%.
  • Existing attack strategies often underestimate the stochastic nature of LLMs.
  • The study introduces a resource allocation framework for optimizing attacks.
  • Many common optimization strategies have little effect on the harmfulness of model outputs.
  • A new label-free objective based on entropy maximization is proposed.
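To make the resource-allocation idea concrete, here is a minimal sketch (the helper names and the probability curve are hypothetical illustrations, not the paper's implementation): each unit of a fixed compute budget buys either one prompt-optimization step or one extra output sample, and repeated sampling turns a small per-sample harmful probability p into 1 - (1 - p)^k over k samples.

```python
import math

def attack_success_prob(p_harmful: float, n_samples: int) -> float:
    """P(at least one of n_samples independent generations is harmful),
    assuming each sample is harmful independently with prob. p_harmful."""
    return 1.0 - (1.0 - p_harmful) ** n_samples

def best_allocation(total_budget: int, p_after_steps):
    """Brute-force the split of a fixed budget between optimization steps
    and samples. p_after_steps(s) models the per-sample harmful
    probability after s optimization steps (a hypothetical curve)."""
    best_steps, best_samples, best_p = 0, 0, 0.0
    for steps in range(total_budget):          # at least one sample remains
        samples = total_budget - steps
        p = attack_success_prob(p_after_steps(steps), samples)
        if p > best_p:
            best_steps, best_samples, best_p = steps, samples, p
    return best_steps, best_samples, best_p

# Toy diminishing-returns curve: optimization slowly raises p_harmful.
def curve(steps: int) -> float:
    return 0.01 + 0.04 * (1.0 - math.exp(-steps / 20.0))

print(attack_success_prob(curve(0), 1))    # one greedy-style generation
print(attack_success_prob(curve(0), 100))  # pure sampling, no optimization
print(best_allocation(100, curve))         # compute-optimal split
```

Under this toy curve, the optimum spends the budget on both optimization and sampling, mirroring the paper's finding that sampling complements prompt optimization rather than replacing it.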

Computer Science > Machine Learning · arXiv:2507.04446 (cs)
[Submitted on 6 Jul 2025 (v1), last revised 22 Feb 2026 (this version, v4)]

Title: Sampling-aware Adversarial Attacks Against Large Language Models
Authors: Tim Beyer, Yan Scholten, Leo Schwinn, Stephan Günnemann

Abstract: To guarantee safe and robust deployment of large language models (LLMs) at scale, it is critical to accurately assess their adversarial robustness. Existing adversarial attacks typically target harmful responses in single-point greedy generations, overlooking the inherently stochastic nature of LLMs and overestimating robustness. We show that for the goal of eliciting harmful responses, repeated sampling of model outputs during the attack complements prompt optimization and serves as a strong and efficient attack vector. By casting attacks as a resource allocation problem between optimization and sampling, we empirically determine compute-optimal trade-offs and show that integrating sampling into existing attacks boosts success rates by up to 37% and improves efficiency by up to two orders of magnitude. We further analyze how distributions of output harmfulness evolve during an adversarial attack, discovering that many common optimization strategies have little effect on output harmfulness. Finally, we introduce a label-free proof-of-concept objective based on entropy maximization.
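The label-free objective mentioned in the abstract optimizes for high output entropy rather than matching a target string. As a hedged sketch of the underlying quantity (this is not the paper's implementation, only the standard definition it builds on), the intuition is that a refusal-locked model concentrates probability mass on a few refusal tokens, so pushing the prompt toward higher-entropy next-token distributions loosens that lock:

```python
import math

def shannon_entropy(probs) -> float:
    """Shannon entropy (in nats) of a next-token probability distribution."""
    assert abs(sum(probs) - 1.0) < 1e-9, "distribution must sum to 1"
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# A refusal-locked model is sharply peaked (low entropy) ...
refusal_like = [0.97, 0.01, 0.01, 0.01]
# ... while a less constrained model spreads mass (high entropy).
diffuse = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(refusal_like))  # low
print(shannon_entropy(diffuse))       # maximal for 4 outcomes: ln(4)
```

An entropy-maximizing objective needs no harmfulness labels: it scores prompts purely by this distributional quantity, which is what makes it "label-free".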

