[2603.00253] CoPeP: Benchmarking Continual Pretraining for Protein Language Models
Computer Science > Machine Learning

arXiv:2603.00253 (cs) [Submitted on 27 Feb 2026]

Title: CoPeP: Benchmarking Continual Pretraining for Protein Language Models

Authors: Darshan Patil, Pranshu Malviya, Mathieu Reymond, Quentin Fournier, Sarath Chandar

Abstract: Protein language models (pLMs) have recently gained significant attention for their ability to uncover relationships between sequence, structure, and function from evolutionary statistics, thereby accelerating therapeutic drug discovery. These models learn from large protein databases that are continuously updated by the biology community. The dynamic nature of these databases motivates the application of continual learning, not only to keep up with the ever-growing data, but also as an opportunity to exploit the temporal meta-information created during this process. We therefore introduce the Continual Pretraining of Protein Language Models (CoPeP) benchmark, a novel benchmark for evaluating continual learning approaches on pLMs. Specifically, we curate a sequence of protein datasets derived from the UniProt Knowledgebase spanning a decade and define metrics to assess pLM performance across 31 protein understanding tasks. We evaluate several methods from the continual learning literature, including replay, unlearning, and plasticity-based method...