[2602.14020] Computable Bernstein Certificates for Cross-Fitted Clipped Covariance Estimation
Summary
This article summarizes a paper that introduces a cross-fitted clipped covariance estimator equipped with computable Bernstein-type deviation certificates, enabling data-driven tuning of the clipping level for heavy-tailed samples that may contain outliers.
Why It Matters
Covariance estimation is fundamental in statistics and machine learning, yet standard estimators degrade under heavy tails and contamination. Euclidean norm clipping is a common safeguard, but its accuracy hinges on an unknown clipping level. By making the deviation bound computable from the data, this work turns the choice of that level into a principled, data-driven step, which matters for practitioners whose real-world data routinely contains anomalies.
Key Takeaways
- Introduces a cross-fitted clipped covariance estimator for improved accuracy.
- Utilizes computable Bernstein-type deviation certificates for data-driven tuning.
- Adapts to intrinsic complexity measures such as effective rank under mild tail regularity, and retains meaningful guarantees under only finite fourth moments.
- Demonstrates stable performance in contaminated spiked-covariance benchmarks.
- Offers a principled approach to handle heavy-tailed samples with outliers.
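To make the takeaways concrete, here is a minimal NumPy sketch of norm clipping and a cross-fitted clipped covariance estimate. All names (`clip_norms`, `cross_fitted_clipped_cov`) and the specific fold-averaging scheme are illustrative assumptions; the paper's exact cross-fitting construction is not reproduced here.

```python
import numpy as np

def clip_norms(X, tau):
    """Shrink each row of X so its Euclidean norm is at most tau
    (clipping rescales samples rather than discarding them)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.minimum(1.0, tau / np.maximum(norms, 1e-12))

def cross_fitted_clipped_cov(X, tau, n_folds=2, seed=0):
    """Illustrative cross-fitted clipped second-moment estimate:
    clip within each random fold, average the per-fold estimates.
    (A simplified stand-in for the paper's procedure.)"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    folds = np.array_split(rng.permutation(n), n_folds)
    Sigma = np.zeros((d, d))
    for fold in folds:
        Xc = clip_norms(X[fold], tau)
        Sigma += Xc.T @ Xc / len(fold)
    return Sigma / n_folds
```

Because every clipped rank-one term satisfies ||x x^T||_op = ||x||^2 <= tau^2, the estimate's operator norm is bounded by tau^2 regardless of how heavy-tailed the raw samples are, which is what makes fully computable deviation certificates possible.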
arXiv:2602.14020 [stat.ML] (Statistics > Machine Learning)
Submitted on 15 Feb 2026
Title: Computable Bernstein Certificates for Cross-Fitted Clipped Covariance Estimation
Authors: Even He, Zaizai Yan
Abstract: We study operator-norm covariance estimation from heavy-tailed samples that may include a small fraction of arbitrary outliers. A simple and widely used safeguard is Euclidean norm clipping, but its accuracy depends critically on an unknown clipping level. We propose a cross-fitted clipped covariance estimator equipped with fully computable Bernstein-type deviation certificates, enabling principled data-driven tuning via a selector (MinUpper) that balances certified stochastic error and a robust hold-out proxy for clipping bias. The resulting procedure adapts to intrinsic complexity measures such as effective rank under mild tail regularity and retains meaningful guarantees under only finite fourth moments. Experiments on contaminated spiked-covariance benchmarks illustrate stable performance and competitive accuracy across regimes.
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG)
Cite as: arXiv:2602.14020 [stat.ML] (or arXiv:2602.14020v1 [stat.ML] for this version)
DOI: https://doi.org/10.48550/arXiv.2602.14020
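The abstract's MinUpper selector balances a certified stochastic-error term against a hold-out proxy for clipping bias. The sketch below shows the general shape of such a rule; the function names, the crude matrix-Bernstein-style constants in `bernstein_proxy`, and the choice of bias proxy (operator-norm gap to the mildest clipping level on hold-out data) are all assumptions for illustration, not the paper's actual certificate.

```python
import numpy as np

def clip_norms(X, tau):
    """Shrink each row's Euclidean norm to at most tau."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.minimum(1.0, tau / np.maximum(norms, 1e-12))

def bernstein_proxy(Xc, tau, delta=0.05):
    """Crude matrix-Bernstein-style deviation proxy for the clipped
    second moment: every clipped term has operator norm <= tau**2,
    so the bound is computable from data alone. Constants are placeholders."""
    n, d = Xc.shape
    v = tau**2 * np.linalg.norm(Xc.T @ Xc / n, 2)  # variance-scale proxy
    t = np.log(2.0 * d / delta)
    return np.sqrt(2.0 * v * t / n) + 2.0 * tau**2 * t / (3.0 * n)

def min_upper_select(X, taus, delta=0.05, seed=0):
    """MinUpper-style rule (illustrative): for each candidate tau, sum the
    certified stochastic-error proxy and a hold-out bias proxy; return
    the tau minimizing that upper bound."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    tr, ho = idx[: len(X) // 2], idx[len(X) // 2:]
    # Bias proxy reference: hold-out second moment at the mildest clipping level.
    Xref = clip_norms(X[ho], max(taus))
    S_ref = Xref.T @ Xref / len(Xref)
    best = None
    for tau in taus:
        stoch = bernstein_proxy(clip_norms(X[tr], tau), tau, delta)
        Xh = clip_norms(X[ho], tau)
        bias = np.linalg.norm(Xh.T @ Xh / len(Xh) - S_ref, 2)
        obj = stoch + bias
        if best is None or obj < best[1]:
            best = (tau, obj)
    return best[0]
```

Small tau suppresses outliers but inflates the bias proxy; large tau drives the bias proxy to zero but loosens the Bernstein-type term, so the minimizer trades the two off, mirroring the certified-error-versus-bias balance the abstract describes.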