[2506.07459] ProteinZero: Self-Improving Protein Generation via Online Reinforcement Learning
Computer Science > Machine Learning

arXiv:2506.07459 (cs)

[Submitted on 9 Jun 2025 (v1), last revised 2 Mar 2026 (this version, v3)]

Title: ProteinZero: Self-Improving Protein Generation via Online Reinforcement Learning

Authors: Ziwen Wang, Jiajun Fan, Ruihan Guo, Thao Nguyen, Heng Ji, Ge Liu

Abstract: Protein generative models have shown remarkable promise in protein design, yet their success rates remain constrained by reliance on curated sequence-structure datasets and by misalignment between supervised objectives and real design goals. We present ProteinZero, an online reinforcement learning framework for inverse folding models that enables scalable, automated, and continuous self-improvement with computationally efficient feedback. ProteinZero employs a reward pipeline that combines structural guidance from ESMFold with a novel self-derived ddG predictor, providing stable multi-objective signals while avoiding the prohibitive cost of physics-based methods. To ensure robustness in online RL, we further introduce a novel embedding-level diversity regularizer that mitigates mode collapse and promotes functionally meaningful sequence variation. Within a general RL formulation balancing multi-reward optimization, KL-divergence from a reference model, and diversity regularization, ProteinZero achieves robust improvements a...
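The abstract describes an RL objective that balances three terms: a (multi-objective) reward, a KL-divergence penalty against a reference model, and an embedding-level diversity regularizer. A minimal sketch of how such a composite objective could be assembled is shown below; the function name, the weights `beta` and `lam`, and the cosine-similarity form of the diversity term are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def composite_objective(rewards, logp_policy, logp_ref, embeddings,
                        beta=0.1, lam=0.05):
    """Hypothetical sketch: maximize mean reward, penalize drift from the
    reference model, and reward embedding-level diversity.

    rewards:     per-sequence scalar rewards, shape (n,)
    logp_policy: per-sequence log-likelihood under the current policy, shape (n,)
    logp_ref:    per-sequence log-likelihood under the reference model, shape (n,)
    embeddings:  per-sequence embedding vectors, shape (n, d)
    """
    reward_term = np.mean(rewards)
    # Monte Carlo proxy for KL(policy || reference) on the sampled batch
    kl_term = np.mean(logp_policy - logp_ref)
    # Diversity: one minus the mean pairwise cosine similarity of embeddings
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T
    n = embeddings.shape[0]
    mean_off_diag = (sim.sum() - np.trace(sim)) / (n * (n - 1))
    diversity_term = 1.0 - mean_off_diag
    # Objective to maximize (negate for a gradient-descent loss)
    return reward_term - beta * kl_term + lam * diversity_term
```

In this toy form, identical policy and reference log-likelihoods zero out the KL term, and orthogonal embeddings maximize the diversity bonus; a real implementation would differentiate through the policy's log-probabilities rather than treat them as fixed arrays.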