[2602.13958] Chemical Language Models for Natural Products: A State-Space Model Approach
Summary
This article presents a novel approach to chemical language models specifically for natural products, showcasing the effectiveness of state-space models compared to traditional transformer models in molecular generation and property prediction.
Why It Matters
Natural products are crucial in drug discovery, yet they are often overlooked in computational chemistry. This research highlights the potential of specialized models to enhance molecular generation and property prediction, which could lead to significant advancements in pharmaceutical development.
Key Takeaways
- NP-specific chemical language models (NPCLMs) outperform traditional transformer baselines in generating valid and unique molecules.
- State-space models (Mamba, Mamba-2) match or exceed the GPT baseline on property prediction, despite being pre-trained on a relatively small NP dataset.
- The study emphasizes the importance of domain-specific pre-training for effective model performance.
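The generation metrics named in the takeaways (validity, uniqueness, novelty) are standard set-based ratios over generated SMILES strings. A minimal sketch is below; note that real validity checking requires a chemistry toolkit such as RDKit (`Chem.MolFromSmiles`), so the `is_valid` stub here is a hypothetical placeholder, not the paper's actual procedure.

```python
def generation_metrics(generated, training_set):
    """Validity, uniqueness, and novelty over generated SMILES strings.

    Validity normally requires a chemistry toolkit (e.g. RDKit's
    MolFromSmiles); the `is_valid` stub below is a placeholder.
    """
    def is_valid(smiles):
        # Placeholder: a real check parses the SMILES with RDKit.
        return bool(smiles)

    valid = [s for s in generated if is_valid(s)]
    unique = set(valid)                      # deduplicate valid molecules
    novel = unique - set(training_set)       # not seen during pre-training
    n = len(generated)
    return {
        "validity": len(valid) / n if n else 0.0,
        "uniqueness": len(unique) / len(valid) if valid else 0.0,
        "novelty": len(novel) / len(unique) if unique else 0.0,
    }

# Toy example: 4 generated strings, one duplicate, one already in training.
metrics = generation_metrics(
    ["CCO", "CCO", "c1ccccc1", "CC(=O)O"], training_set={"CCO"}
)
```

On this toy input, validity is 1.0 (all strings pass the stub), uniqueness is 3/4, and novelty is 2/3, illustrating why the three ratios can diverge.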
Computer Science > Machine Learning — arXiv:2602.13958 (cs) [Submitted on 15 Feb 2026]
Title: Chemical Language Models for Natural Products: A State-Space Model Approach
Authors: Ho-Hsuan Wang, Afnan Sultan, Andrea Volkamer, Dietrich Klakow
Abstract: Language models are widely used in chemistry for molecular property prediction and small-molecule generation, yet Natural Products (NPs) remain underexplored despite their importance in drug discovery. To address this gap, we develop NP-specific chemical language models (NPCLMs) by pre-training state-space models (Mamba and Mamba-2) and comparing them with transformer baselines (GPT). Using a dataset of about 1M NPs, we present the first systematic comparison of selective state-space models and transformers for NP-focused tasks, together with eight tokenization strategies including character-level, Atom-in-SMILES (AIS), byte-pair encoding (BPE), and NP-specific BPE. We evaluate molecule generation (validity, uniqueness, novelty) and property prediction (membrane permeability, taste, anti-cancer activity) using MCC and AUC-ROC. Mamba generates 1-2 percent more valid and unique molecules than Mamba-2 and GPT, with fewer long-range dependency errors, while GPT yields slightly more novel structures. For property prediction, Mamba variants outperform GPT by 0.02-0.0...
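The abstract reports property-prediction results in MCC (Matthews correlation coefficient), a confusion-matrix statistic that stays informative under class imbalance. A self-contained sketch of the standard formula, with hypothetical example counts chosen for illustration:

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from confusion-matrix counts.

    MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)),
    ranging from -1 (total disagreement) to +1 (perfect prediction).
    """
    numerator = tp * tn - fp * fn
    denominator = math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    # Convention: MCC is 0 when any marginal is empty (denominator 0).
    return numerator / denominator if denominator else 0.0

# Hypothetical classifier: 40 TP, 45 TN, 5 FP, 10 FN.
score = mcc(40, 45, 5, 10)  # roughly 0.70
```

A difference of a few hundredths of MCC, as reported between Mamba variants and GPT, is therefore a small but direct shift in this correlation scale.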