[2602.20971] Does Order Matter : Connecting The Law of Robustness to Robust Generalization
Summary
This paper explores the relationship between the law of robustness and robust generalization in machine learning, providing a framework for understanding how a model's Lipschitz constant and capacity affect its performance on unseen, adversarially perturbed data.
Why It Matters
Understanding the connection between robustness and generalization is crucial for developing machine learning models that perform reliably in real-world scenarios. This research addresses an open problem in the field, offering insights that could influence future model design and training methodologies.
Key Takeaways
- Introduces a nontrivial notion of robust generalization error.
- Establishes a connection between robust training loss and robust test loss.
- Shows that, up to constants, robust generalization does not change the order of the Lipschitz constant required for smooth interpolation.
- Empirical results align with theoretical predictions regarding model capacity.
- Provides lower bounds on expected Rademacher complexity for robust loss classes.
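Rademacher complexity, central to the bounds above, can be illustrated numerically. The sketch below (not from the paper; the unit-norm linear class is an assumption chosen because its supremum has a closed form) Monte Carlo estimates the empirical Rademacher complexity of F = {x ↦ w·x : ‖w‖ ≤ 1}, where sup over w reduces to the norm of the sign-weighted sample mean:

```python
import numpy as np

# Illustrative sketch, not the paper's construction: estimate the empirical
# Rademacher complexity of the unit-norm linear class on a fixed sample.
# For F = {x -> w.x : ||w|| <= 1}, the inner supremum has a closed form:
#   sup_{||w||<=1} (1/n) sum_i s_i (w . x_i) = || (1/n) sum_i s_i x_i ||.
rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))  # synthetic sample, one row per data point

def empirical_rademacher(X, trials=2000, rng=rng):
    """Average the closed-form supremum over random Rademacher sign draws."""
    n = X.shape[0]
    vals = []
    for _ in range(trials):
        sigma = rng.choice([-1.0, 1.0], size=n)  # i.i.d. Rademacher signs
        vals.append(np.linalg.norm(sigma @ X) / n)
    return float(np.mean(vals))

print(empirical_rademacher(X))  # decays on the order of sqrt(d/n)
```

For this class the estimate shrinks as the sample grows, which is the mechanism behind generalization bounds; the paper's contribution is a *lower* bound on this quantity for the induced robust loss class, which is what forces the Lipschitz requirement.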
Computer Science > Machine Learning
arXiv:2602.20971 (cs) — Submitted on 24 Feb 2026
Title: Does Order Matter : Connecting The Law of Robustness to Robust Generalization
Authors: Himadri Mandal, Vishnu Varadarajan, Jaee Ponde, Aritra Das, Mihir More, Debayan Gupta
Abstract: Bubeck and Sellke (2021) pose as an open problem the connection between the law of robustness and robust generalization. The law of robustness states that overparameterization is necessary for models to interpolate robustly; in particular, robust interpolation requires the learned function to be Lipschitz. Robust generalization asks whether small robust training loss implies small robust test loss. We resolve this problem by explicitly connecting the two for arbitrary data distributions. Specifically, we introduce a nontrivial notion of robust generalization error and convert it into a lower bound on the expected Rademacher complexity of the induced robust loss class. Our bounds recover the $\Omega(n^{1/d})$ regime of Wu et al.\ (2023) and show that, up to constants, robust generalization does not change the order of the Lipschitz constant required for smooth interpolation. We conduct experiments to probe the predicted scaling with dataset size and model capacity, testing whether empirical behavior aligns more closely with the predi...
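The question posed in the abstract — whether small robust training loss implies small robust test loss — follows the standard uniform-convergence template via Rademacher complexity. A hedged sketch of that template (the textbook form, not the paper's exact theorem; the paper's contribution concerns lower bounds on the complexity term):

```latex
% Standard uniform-convergence template (not the paper's exact statement):
% with probability at least 1 - \eta over an i.i.d. sample of size n,
% for every f in the class \mathcal{F},
\[
  \underbrace{\mathbb{E}_{(x,y)}\Bigl[\sup_{\|\delta\|\le\epsilon}
      \ell\bigl(f(x+\delta), y\bigr)\Bigr]}_{\text{robust test loss}}
  \;\le\;
  \underbrace{\frac{1}{n}\sum_{i=1}^{n} \sup_{\|\delta_i\|\le\epsilon}
      \ell\bigl(f(x_i+\delta_i), y_i\bigr)}_{\text{robust training loss}}
  \;+\; 2\,\mathfrak{R}_n\bigl(\ell_{\mathrm{rob}} \circ \mathcal{F}\bigr)
  \;+\; 3\sqrt{\frac{\log(2/\eta)}{2n}} .
\]
```

Here $\mathfrak{R}_n(\ell_{\mathrm{rob}} \circ \mathcal{F})$ is the Rademacher complexity of the induced robust loss class; the paper lower-bounds this quantity, which is what ties robust generalization back to the Lipschitz requirement of the law of robustness.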