[2503.10503] Sample Compression for Self Certified Continual Learning
Summary
The paper introduces Continual Pick-to-Learn (CoP2L), a method for continual learning that uses sample compression to mitigate catastrophic forgetting while providing computable learning guarantees.
Why It Matters
This research addresses a critical challenge in machine learning, catastrophic forgetting, by proposing a method that not only retains important samples but also certifies the reliability of the learned predictors. This has implications for building more robust AI systems that can learn continuously across diverse tasks.
Key Takeaways
- CoP2L provides a principled approach to sample retention in continual learning.
- The method offers computable upper bounds on generalization loss, enhancing reliability.
- Empirical results show that CoP2L is competitive with baseline methods on standard benchmarks.
Computer Science > Machine Learning
arXiv:2503.10503 (cs.LG)
Submitted on 13 Mar 2025 (v1); last revised 26 Feb 2026 (this version, v4)
Title: Sample Compression for Self Certified Continual Learning
Authors: Jacob Comeau, Mathieu Bazinet, Pascal Germain, Cem Subakan
Abstract: Continual learning algorithms aim to learn from a sequence of tasks. In order to avoid catastrophic forgetting, most existing approaches rely on heuristics and do not provide computable learning guarantees. In this paper, we introduce Continual Pick-to-Learn (CoP2L), a method grounded in sample compression theory that retains representative samples for each task in a principled and efficient way. This allows us to derive non-vacuous, numerically computable upper bounds on the generalization loss of the learned predictors after each task. We evaluate CoP2L on standard continual learning benchmarks under Class-Incremental and Task-Incremental settings, showing that it effectively mitigates catastrophic forgetting. It turns out that CoP2L is empirically competitive with baseline methods while certifying predictor reliability in continual learning with a non-vacuous bound.
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2503.10503 [cs.LG] (arXiv:2503.10503v4 for this version), https://doi.org/10.48550/arXiv.2503.10503
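To give a feel for what "numerically computable upper bounds on the generalization loss" means in sample compression theory, the sketch below computes a generic Floyd–Warmuth-style sample compression bound via binomial tail inversion. This is not the paper's exact CoP2L bound; the function names, the union-bound factor C(m, k), and the parameter choices are illustrative assumptions. The idea: a predictor reconstructed from k retained samples out of m is evaluated on the m - k held-out points, and the observed error count r is inverted into a high-probability bound on the true risk.

```python
from math import comb

def binom_cdf(r: int, n: int, p: float) -> float:
    """P[Bin(n, p) <= r], computed by direct summation."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(r + 1))

def binomial_tail_inversion(r: int, n: int, delta: float, tol: float = 1e-9) -> float:
    """Smallest p such that P[Bin(n, p) <= r] <= delta, found by bisection.

    This is the classical test-set-style bound: if we observe r errors
    on n fresh points, the true error exceeds this p with probability
    at most delta.
    """
    lo, hi = r / n, 1.0  # CDF at lo is >= 1/2 > delta; CDF at hi is <= delta
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(r, n, mid) > delta:
            lo = mid
        else:
            hi = mid
    return hi

def sample_compression_bound(m: int, k: int, r: int, delta: float) -> float:
    """Hypothetical sample compression bound (Floyd–Warmuth flavor).

    The predictor is rebuilt from a compression set of k of the m
    training points and makes r errors on the remaining m - k points.
    A union bound over all C(m, k) possible compression sets shrinks
    the confidence budget before inverting the binomial tail.
    """
    delta_eff = delta / comb(m, k)
    return binomial_tail_inversion(r, m - k, delta_eff)

# Example: 1000 training points, 20 retained, 5 errors on the rest.
bound = sample_compression_bound(m=1000, k=20, r=5, delta=0.05)
```

The key property, which CoP2L inherits from this family of bounds, is that the bound depends only on observable quantities (m, k, r, delta), so it can be evaluated numerically after each task; it tightens as the compression set shrinks and as the held-out error drops.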