[2602.23128] Bound to Disagree: Generalization Bounds via Certifiable Surrogates

arXiv - Machine Learning

Summary

The paper presents new disagreement-based certificates that bound the gap in true risk between any two predictors, allowing the risk of a deep model to be certified via a surrogate with tight guarantees. The approach addresses the vacuousness, non-computability, and model-class restrictions of existing bounds, and its effectiveness is demonstrated empirically across three surrogate-training frameworks.

Why It Matters

Generalization bounds are crucial for understanding the performance of machine learning models. This research offers a novel approach to derive tighter, computable bounds without altering the target model or its training procedure, providing certified guarantees on a model's true risk in practical deep learning settings.

Key Takeaways

  • Introduces disagreement-based certificates for the gap in true risk between any two predictors (see the decomposition sketched below).
  • Evaluates the disagreement bound on an unlabeled dataset, so no additional labeled data is required.
  • Achieves tight bounds without modifying the target model or adapting its training procedure.
  • Trains surrogates under three frameworks: sample compression, model compression, and PAC-Bayes theory.
  • Addresses the vacuousness, non-computability, and model-class restrictions of existing generalization bounds.
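
As a hedged sketch of the underlying idea (this is the standard triangle-inequality decomposition for the 0-1 loss, offered here as intuition rather than the paper's exact certificate): for a target predictor $f$ and a surrogate $g$,

$$
\mathbf{1}[f(x) \neq y] \;\le\; \mathbf{1}[g(x) \neq y] + \mathbf{1}[f(x) \neq g(x)]
\quad\Longrightarrow\quad
R(f) \;\le\; R(g) \;+\; \Pr_{x \sim \mathcal{D}}\big[f(x) \neq g(x)\big].
$$

A certified bound on $R(g)$, plus a bound on the disagreement term, which involves no labels and can therefore be estimated from unlabeled data, yields a certificate on $R(f)$.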

Computer Science > Machine Learning
arXiv:2602.23128 (cs) [Submitted on 26 Feb 2026]

Title: Bound to Disagree: Generalization Bounds via Certifiable Surrogates
Authors: Mathieu Bazinet, Valentina Zantedeschi, Pascal Germain

Abstract: Generalization bounds for deep learning models are typically vacuous, not computable, or restricted to specific model classes. In this paper, we tackle these issues by providing new disagreement-based certificates for the gap between the true risks of any two predictors. We then bound the true risk of the predictor of interest via a surrogate model that enjoys tight generalization guarantees, evaluating our disagreement bound on an unlabeled dataset. We empirically demonstrate the tightness of the obtained certificates and showcase the versatility of the approach by training surrogate models leveraging three different frameworks: sample compression, model compression, and PAC-Bayes theory. Importantly, such guarantees are achieved without modifying the target model, nor adapting the training procedure to the generalization framework.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2602.23128 [cs.LG] (or arXiv:2602.23128v1 [cs.LG] for this version), https://doi.org/10.48550/arXiv.2602.23128
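
To make the recipe concrete, here is a minimal, hypothetical sketch (not the paper's code; the Hoeffding form of the slack term, the function names, and the union-bound bookkeeping are all assumptions) of how a certified surrogate bound and an unlabeled-data disagreement estimate combine:

```python
import numpy as np

def disagreement_bound(preds_f, preds_g, delta=0.05):
    """Upper-bound the true disagreement P[f(x) != g(x)] from an
    unlabeled sample (no labels are needed for this term)."""
    preds_f = np.asarray(preds_f)
    preds_g = np.asarray(preds_g)
    n = len(preds_f)
    # Empirical disagreement rate between the two predictors.
    emp = np.mean(preds_f != preds_g)
    # One-sided Hoeffding slack: bound holds with probability >= 1 - delta.
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return min(1.0, emp + slack)

def certified_risk(surrogate_bound, preds_f, preds_g, delta=0.05):
    """Certify the target model f via R(f) <= R(g) + P[f != g] (0-1 loss).

    If the surrogate bound holds with probability 1 - delta_g and the
    disagreement bound with probability 1 - delta, the sum holds with
    probability at least 1 - delta_g - delta by a union bound.
    """
    return min(1.0, surrogate_bound + disagreement_bound(preds_f, preds_g, delta))
```

For instance, a surrogate certified at R(g) <= 0.10 combined with a disagreement bound of 0.04 on the unlabeled set would certify R(f) <= 0.14; the numbers are purely illustrative, not results from the paper.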

Related Articles

  • [2603.16790] InCoder-32B: Code Foundation Model for Industrial Scenarios (arXiv - AI)
  • [2603.16430] EngGPT2: Sovereign, Efficient and Open Intelligence (arXiv - AI)
  • [2603.13846] Is Seeing Believing? Evaluating Human Sensitivity to Synthetic Video (arXiv - AI)
  • [2603.13294] Real-World AI Evaluation: How FRAME Generates Systematic Evidence to Resolve the Decision-Maker's Dilemma (arXiv - AI)
