[2603.01264] S2O: Enhancing Adversarial Training with Second-Order Statistics of Weights
Computer Science > Machine Learning
arXiv:2603.01264 (cs)
[Submitted on 1 Mar 2026]

Title: S2O: Enhancing Adversarial Training with Second-Order Statistics of Weights
Authors: Gaojie Jin, Xinping Yi, Wei Huang, Sven Schewe, Xiaowei Huang

Abstract: Adversarial training has emerged as a highly effective way to improve the robustness of deep neural networks (DNNs). It is typically formulated as a min-max optimization problem over model weights and adversarial perturbations, where the weights are optimized with gradient-based methods such as SGD. In this paper, we propose a novel approach that treats model weights as random variables, which paves the way for enhancing adversarial training through Second-Order Statistics Optimization (S2O) over model weights. We challenge and relax a prevalent, yet often unrealistic, assumption in prior PAC-Bayesian frameworks: the statistical independence of weights. From this relaxation, we derive an improved PAC-Bayesian robust generalization bound. Our theoretical analysis suggests that optimizing the second-order statistics of weights can substantially tighten this bound. We complement this insight with an extensive set of experiments demonstrating that S2O not only enhances the robustness and generalization...
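The min-max formulation the abstract refers to can be illustrated with a minimal adversarial training step on a toy linear classifier: an FGSM-style inner maximization over the input perturbation, followed by an SGD outer minimization over the weights. This is a generic sketch of standard adversarial training, not the paper's S2O method; all function names, the model, and the hyperparameters are illustrative.

```python
import numpy as np

def loss_and_grads(w, x, y):
    # Logistic loss of a linear model: L = log(1 + exp(-y * <w, x>)),
    # with gradients w.r.t. both the weights and the input.
    margin = y * np.dot(w, x)
    s = 1.0 / (1.0 + np.exp(margin))   # sigmoid(-margin)
    loss = np.log1p(np.exp(-margin))
    grad_w = -s * y * x
    grad_x = -s * y * w
    return loss, grad_w, grad_x

def adv_train_step(w, x, y, eps=0.1, lr=0.5):
    # Inner maximization: one FGSM step, an L-inf perturbation of size eps
    # in the direction that increases the loss w.r.t. the input.
    _, _, gx = loss_and_grads(w, x, y)
    x_adv = x + eps * np.sign(gx)
    # Outer minimization: SGD step on the loss at the adversarial example.
    loss_adv, gw, _ = loss_and_grads(w, x_adv, y)
    return w - lr * gw, loss_adv

# Toy run: a single labeled point, repeated min-max updates.
w = np.zeros(3)
x = np.array([1.0, -0.5, 0.25])
y = 1.0
for _ in range(50):
    w, loss = adv_train_step(w, x, y)
```

After the loop, the adversarial loss has dropped well below its initial value of log 2, i.e. the model classifies the point correctly even under the perturbation.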