[2603.28015] What an Autonomous Agent Discovers About Molecular Transformer Design: Does It Transfer?
Computer Science > Artificial Intelligence
arXiv:2603.28015 (cs)
[Submitted on 30 Mar 2026]

Title: What an Autonomous Agent Discovers About Molecular Transformer Design: Does It Transfer?
Authors: Edward Wijaya

Abstract: Deep learning models for drug-like molecules and proteins overwhelmingly reuse transformer architectures designed for natural language, yet whether molecular sequences benefit from different designs has not been systematically tested. We deploy autonomous architecture search via an agent across three sequence types (SMILES, protein, and English text as control), running 3,106 experiments on a single GPU. For SMILES, architecture search is counterproductive: tuning learning rates and schedules alone outperforms the full search (p = 0.001). For natural language, architecture changes drive 81% of improvement (p = 0.009). Proteins fall between the two. Surprisingly, although the agent discovers distinct architectures per domain (p = 0.004), every innovation transfers across all three domains with <1% degradation, indicating that the differences reflect search-path dependence rather than fundamental biological requirements. We release a decision framework and open-source toolkit for molecular modeling teams to choose between autonomous architecture search and simple hyperparameter tuning.
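The released decision framework is not reproduced here, but the abstract's per-domain findings can be sketched as a minimal, hypothetical selection rule (the function name, domain labels, and recommendations below are illustrative assumptions, not the paper's toolkit API):

```python
# Hypothetical sketch of a per-domain tuning recommendation, following
# the abstract's findings: SMILES favors hyperparameter tuning alone,
# natural language favors architecture search, proteins fall in between.
# All names and categories here are illustrative, not the released toolkit.

def recommend_strategy(domain: str) -> str:
    """Return a suggested optimization strategy for a sequence domain."""
    domain = domain.strip().lower()
    if domain == "smiles":
        # Search was counterproductive for SMILES (p = 0.001).
        return "hyperparameter tuning only (learning rate and schedule)"
    if domain in ("text", "natural language", "english"):
        # Architecture changes drove 81% of improvement (p = 0.009).
        return "autonomous architecture search"
    if domain == "protein":
        # Proteins fell between the two regimes.
        return "tune hyperparameters first, then consider architecture search"
    raise ValueError(f"unknown sequence domain: {domain!r}")
```

A caller would pass its data modality and receive the coarse recommendation, e.g. `recommend_strategy("SMILES")`; any finer-grained thresholds would come from the paper's actual framework.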