[2602.15238] Closing the Distribution Gap in Adversarial Training for LLMs
Summary
This article discusses a novel approach to adversarial training for large language models (LLMs): Distributional Adversarial Training (DAT), which aims to improve robustness against in-distribution prompt exploits that current methods fail to cover.
Why It Matters
As LLMs are increasingly integrated into various applications, their susceptibility to adversarial attacks poses significant risks. This research addresses a critical gap in existing adversarial training methods, offering a solution that could improve the reliability and safety of LLMs in real-world scenarios.
Key Takeaways
- Current adversarial training methods inadequately cover data distributions, leading to vulnerabilities.
- Distributional Adversarial Training (DAT) leverages diffusion models to enhance sample diversity.
- DAT combines optimization over data distribution with continuous adversarial training for improved robustness.
- The proposed method shows significantly higher adversarial robustness compared to existing techniques.
- Addressing these vulnerabilities is crucial for the safe deployment of LLMs in sensitive applications.
Computer Science > Machine Learning
arXiv:2602.15238 (cs) [Submitted on 16 Feb 2026]
Title: Closing the Distribution Gap in Adversarial Training for LLMs
Authors: Chengzhi Hu, Jonas Dornbusch, David Lüdke, Stephan Günnemann, Leo Schwinn
Abstract: Adversarial training for LLMs is one of the most promising methods to reliably improve robustness against adversaries. However, despite significant progress, models remain vulnerable to simple in-distribution exploits, such as rewriting prompts in the past tense or translating them into other languages. We argue that this persistent fragility stems from a fundamental limitation in current adversarial training algorithms: they minimize adversarial loss on their training set but inadequately cover the data distribution, resulting in vulnerability to seemingly simple attacks. To bridge this gap, we propose Distributional Adversarial Training (DAT). We leverage Diffusion LLMs to approximate the true joint distribution of prompts and responses, enabling generation of diverse, high-likelihood samples that address generalization failures. By combining optimization over the data distribution provided by the diffusion model with continuous adversarial training, DAT achieves substantially higher adversarial robustness than previous methods.
Subjects: Machine Learning (cs.LG); Artificial In...
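The abstract only gives the high-level recipe: sample diverse prompts from a generative model that approximates the data distribution, then run a continuous (embedding-space) adversarial attack on each sample and minimize the resulting adversarial loss. The toy sketch below illustrates that loop in numpy under loud assumptions: a logistic scorer stands in for the LLM, random jitter of seed embeddings stands in for the Diffusion LLM sampler, and embedding-space PGD stands in for the paper's continuous attack. None of these components come from the paper itself; they only show the shape of one DAT-style training step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's actual components):
# - diffusion_sample mimics drawing diverse, high-likelihood prompt variants;
#   the paper uses a Diffusion LLM for this.
# - the "model" is a logistic scorer over a prompt embedding; a real LLM's
#   safety loss would take its place.
W = rng.normal(size=8)                   # toy model weights
seed_prompts = rng.normal(size=(4, 8))   # seed prompt embeddings

def diffusion_sample(seeds, noise=0.1):
    """Stand-in for sampling prompt variants from a generative model."""
    return seeds + noise * rng.normal(size=seeds.shape)

def loss_and_grad(x, y):
    """Logistic loss and its gradient w.r.t. the embedding x."""
    p = 1.0 / (1.0 + np.exp(-x @ W))
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad = (p - y) * W                   # d loss / d x
    return loss, grad

def continuous_attack(x, y, eps=0.5, step=0.2, steps=5):
    """Embedding-space PGD: ascend the loss inside an L-inf ball of radius eps."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        _, g = loss_and_grad(x + delta, y)
        delta = np.clip(delta + step * np.sign(g), -eps, eps)
    return x + delta

# One DAT-style step: sample from the approximate data distribution,
# then attack each sample in continuous embedding space.
y = 1.0  # toy "desired behavior" label for every prompt
clean_losses, adv_losses = [], []
for x in diffusion_sample(seed_prompts):
    clean, _ = loss_and_grad(x, y)
    adv, _ = loss_and_grad(continuous_attack(x, y), y)
    clean_losses.append(clean)
    adv_losses.append(adv)

# A DAT update would minimize the adversarial loss; here we just check
# that the attack actually raised it relative to the clean loss.
print(float(np.mean(adv_losses)) >= float(np.mean(clean_losses)))
```

The split into `diffusion_sample` and `continuous_attack` mirrors the abstract's two ingredients: coverage of the data distribution comes from the sampler, while worst-case robustness at each sampled point comes from the continuous attack.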