[2604.03506] BioAlchemy: Distilling Biological Literature into Reasoning-Ready Reinforcement Learning Training Data
Computer Science > Artificial Intelligence
arXiv:2604.03506 (cs) [Submitted on 3 Apr 2026]

Title: BioAlchemy: Distilling Biological Literature into Reasoning-Ready Reinforcement Learning Training Data

Authors: Brian Hsu, Ozan Gökdemir, Carlo Siebenschuh, Bruce Parrello, Neil Getty, Thomas S. Brettin, Rick L. Stevens, Ian T. Foster, Nicholas Chia, Arvind Ramanathan

Abstract: Despite the large corpus of biology training text, the impact of reasoning models on biological research generally lags behind math and coding. In this work, we show that biology questions from current large-scale reasoning datasets do not align well with modern research topic distributions in biology, and that this topic imbalance may negatively affect performance. In addition, we find that methods for extracting challenging and verifiable research problems from biology research text are a critical yet underdeveloped ingredient in applying reinforcement learning for better performance on biology research tasks. We introduce BioAlchemy, a pipeline for sourcing a diverse set of verifiable question-and-answer pairs from a scientific corpus of biology research text. We curate BioAlchemy-345K, a training dataset containing over 345K scientific reasoning problems in biology. Then, we demonstrate how aligning our datas...