[2602.22289] What Topological and Geometric Structure Do Biological Foundation Models Learn? Evidence from 141 Hypotheses
Summary
The paper investigates the geometric and topological structures learned by biological foundation models, proposing, testing, and refining 141 hypotheses across 52 iterations with an AI-driven executor-brainstormer loop.
Why It Matters
Understanding the geometric and topological structure inside biological foundation models is crucial for validating their biological relevance: it helps distinguish genuinely meaningful representations from training artifacts in gene expression analysis. The research contributes to computational biology and machine learning by providing quantitative insight into model behavior, with potential applications in genomics.
Key Takeaways
- Biological foundation models learn genuine geometric structures in gene expression data.
- Models share coarse geometric structure, indicating cross-model consistency, but differ in where they place individual genes.
- Robust signals are localized, particularly in immune tissue, highlighting the need for targeted analysis.
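The claim that the structure is "genuine" rests on the paper's explicit null controls: an observed geometric or topological statistic is compared against shuffled data whose joint geometry has been destroyed. A minimal sketch of that idea, under stated assumptions: the paper uses persistent homology computed with TDA tooling, whereas here a simpler mean k-nearest-neighbor-distance statistic stands in for it, and all function names and parameters below are illustrative, not from the paper.

```python
import numpy as np

def knn_stat(X, k=5):
    # Mean distance to the k nearest neighbors: a simple geometric
    # summary standing in for a full persistent-homology statistic.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)  # exclude self-distances
    return np.sort(D, axis=1)[:, :k].mean()

def permutation_null_pvalue(X, n_null=200, seed=0):
    # Null control: permute each embedding dimension independently,
    # destroying joint geometry while preserving per-dimension marginals.
    rng = np.random.default_rng(seed)
    observed = knn_stat(X)
    null = np.array([
        knn_stat(np.column_stack(
            [rng.permutation(X[:, j]) for j in range(X.shape[1])]
        ))
        for _ in range(n_null)
    ])
    # One-sided p-value: neighborhoods tighter than the null suggest
    # real geometric structure (add-one smoothing avoids p = 0).
    return (1 + np.sum(null <= observed)) / (1 + n_null)

# Toy embeddings with two tight clusters should yield a small p-value.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, size=(30, 8)) for c in (0.0, 3.0)])
p = permutation_null_pvalue(X)
```

The same scaffold applies to any statistic: swap `knn_stat` for a persistence-based summary and the significance test is unchanged.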
arXiv:2602.22289 [q-bio.QM, Quantitative Methods] — Submitted on 25 Feb 2026
Authors: Ihor Kendiukhov
Abstract
When biological foundation models such as scGPT and Geneformer process single-cell gene expression, what geometric and topological structure forms in their internal representations? Is that structure biologically meaningful or a training artifact, and how confident should we be in such claims? We address these questions through autonomous large-scale hypothesis screening: an AI-driven executor-brainstormer loop that proposed, tested, and refined 141 geometric and topological hypotheses across 52 iterations, covering persistent homology, manifold distances, cross-model alignment, community structure, and directed topology, all with explicit null controls and disjoint gene-pool splits. Three principal findings emerge. First, the models learn genuine geometric structure. Gene embedding neighborhoods exhibit non-trivial topology, with persistent homology significant in 11 of 12 transformer layers at p < 0.05 in the weakest domain and 12 of 12 in the other two. A multi-level distance hierarchy shows that manifold-aware metrics outperform Euclidean dista…
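The "manifold-aware metrics" mentioned in the abstract measure distances along the data manifold rather than straight through ambient space. A hedged sketch of one standard construction (geodesic distance approximated by shortest paths on a k-nearest-neighbor graph, as in Isomap; the paper does not specify its exact metric, and the function names here are illustrative):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def geodesic_distances(X, k=6):
    # Manifold-aware distance: keep only Euclidean edges to the k
    # nearest neighbors, then take shortest paths through that graph.
    D = squareform(pdist(X))
    n = len(X)
    W = np.full((n, n), np.inf)  # inf = no edge in a dense csgraph
    for i in range(n):
        nbrs = np.argsort(D[i])[1 : k + 1]  # skip self at index 0
        W[i, nbrs] = D[i, nbrs]
        W[nbrs, i] = D[i, nbrs]  # symmetrize the k-NN graph
    np.fill_diagonal(W, 0.0)
    return shortest_path(W, method="D", directed=False)  # Dijkstra

# Points on a half circle: the underlying manifold is a 1-D curve.
t = np.linspace(0, np.pi, 50)
X = np.column_stack([np.cos(t), np.sin(t)])
G = geodesic_distances(X)          # graph-geodesic distances
D = squareform(pdist(X))           # plain Euclidean distances
# For the endpoints, Euclidean distance is the chord (length 2),
# while the geodesic tracks the arc (length close to pi).
```

The gap between `G` and `D` for far-apart points is exactly what a "multi-level distance hierarchy" can exploit: Euclidean distance shortcuts across the ambient space, while the graph geodesic respects the curved geometry of the embedding.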