[2604.02555] Robust Learning with Optimal Error
Computer Science > Data Structures and Algorithms
arXiv:2604.02555 (cs)
[Submitted on 2 Apr 2026]

Title: Robust Learning with Optimal Error
Authors: Guy Blanc

Abstract: We construct algorithms with optimal error for learning with adversarial noise. The overarching theme of this work is that the use of \textsl{randomized} hypotheses can substantially improve upon the best error rates achievable with deterministic hypotheses.

- For $\eta$-rate malicious noise, we show the optimal error is $\frac{1}{2} \cdot \eta/(1-\eta)$, improving on the optimal error of deterministic hypotheses by a factor of $1/2$. This answers an open question of Cesa-Bianchi et al. (JACM 1999), who showed randomness can improve error by a factor of $6/7$.
- For $\eta$-rate nasty noise, we show the optimal error is $\frac{3}{2} \cdot \eta$ for distribution-independent learners and $\eta$ for fixed-distribution learners, both improving upon the optimal $2\eta$ error of deterministic hypotheses. This closes a gap first noted by Bshouty et al. (Theoretical Computer Science 2002) when they introduced nasty noise, and reiterated in the recent works of Klivans et al. (NeurIPS 2025) and Blanc et al. (SODA 2026).
- For $\eta$-rate agnostic noise and the closely related nasty classification noise model, we show the optimal error is $\eta$, improving upon the optimal $2\eta$ error of deterministic hypotheses.

All of our learners ...
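A minimal sketch, not from the paper, that tabulates the error rates quoted in the abstract as functions of the noise rate $\eta$. The randomized-hypothesis rates are stated explicitly; the deterministic rate for malicious noise ($\eta/(1-\eta)$) is inferred from the claimed factor-$1/2$ improvement. All function names are ours, for illustration only.

```python
# Error rates from the abstract, as functions of the noise rate eta (0 <= eta < 1/2).
# Formulas for randomized hypotheses are stated in the abstract; the deterministic
# malicious-noise rate is inferred from the claimed factor-1/2 improvement.

def malicious_randomized(eta: float) -> float:
    # Optimal error with randomized hypotheses: (1/2) * eta / (1 - eta)
    return 0.5 * eta / (1 - eta)

def malicious_deterministic(eta: float) -> float:
    # Inferred deterministic optimum: eta / (1 - eta) (twice the randomized rate)
    return eta / (1 - eta)

def nasty_randomized(eta: float) -> float:
    # Distribution-independent learners with randomized hypotheses: (3/2) * eta
    return 1.5 * eta

def nasty_deterministic(eta: float) -> float:
    # Deterministic hypotheses under nasty noise: 2 * eta
    return 2.0 * eta

if __name__ == "__main__":
    for eta in (0.05, 0.10, 0.20):
        print(f"eta={eta:.2f}: "
              f"malicious {malicious_randomized(eta):.4f} (rand) vs "
              f"{malicious_deterministic(eta):.4f} (det); "
              f"nasty {nasty_randomized(eta):.3f} (rand) vs "
              f"{nasty_deterministic(eta):.3f} (det)")
```

In every row the randomized rate is strictly below the deterministic one, which is the abstract's central claim.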