[2603.03756] MOOSE-Star: Unlocking Tractable Training for Scientific Discovery by Breaking the Complexity Barrier
Computer Science > Machine Learning
arXiv:2603.03756 (cs)
[Submitted on 4 Mar 2026]

Title: MOOSE-Star: Unlocking Tractable Training for Scientific Discovery by Breaking the Complexity Barrier
Authors: Zonglin Yang, Lidong Bing

Abstract: While large language models (LLMs) show promise in scientific discovery, existing research focuses on inference or feedback-driven training, leaving the direct modeling of the generative reasoning process, $P(\text{hypothesis}\mid\text{background})$ ($P(h\mid b)$), unexplored. We demonstrate that directly training $P(h\mid b)$ is mathematically intractable due to the combinatorial complexity ($O(N^k)$) inherent in retrieving and composing inspirations from a vast knowledge base. To break this barrier, we introduce MOOSE-Star, a unified framework enabling tractable training and scalable inference. In the best case, MOOSE-Star reduces complexity from exponential to logarithmic ($O(\log N)$) by (1) training on decomposed subtasks derived from the probabilistic equation of discovery, (2) employing motivation-guided hierarchical search to enable logarithmic retrieval and prune irrelevant subspaces, and (3) utilizing bounded composition for robustness against retrieval noise. To facilitate this, we release TOMATO-Star, a dataset of 108,717 decomposed papers (38,400 ...
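The logarithmic-retrieval claim rests on a standard property of hierarchical search: descending a balanced tree prunes half the candidate space at each step, so a knowledge base of $N$ items is searched in $O(\log N)$ comparisons rather than an $O(N)$ flat scan. The following toy sketch illustrates that property only; the tree construction, scalar "relevance score", and pivot rule here are illustrative assumptions, not the paper's actual motivation-guided search.

```python
# Toy illustration of hierarchical retrieval: O(log N) descent vs. O(N) scan.
# The scalar scores stand in for whatever motivation signal guides pruning;
# this is NOT the MOOSE-Star implementation, just the complexity argument.

def build(items):
    """Build a balanced binary tree over items sorted by a scalar score.
    Leaves hold single items; internal nodes hold a pivot for pruning."""
    if len(items) == 1:
        return items[0]
    mid = len(items) // 2
    return {"pivot": items[mid],
            "left": build(items[:mid]),
            "right": build(items[mid:])}

def search(node, query, comparisons=0):
    """Descend the tree, pruning the half that cannot contain the query,
    and count comparisons to show the logarithmic cost."""
    if not isinstance(node, dict):
        return node, comparisons
    comparisons += 1
    branch = node["right"] if query >= node["pivot"] else node["left"]
    return search(branch, query, comparisons)

items = list(range(1024))        # toy knowledge base of 1024 "inspirations"
tree = build(items)
hit, cost = search(tree, 700)
print(hit, cost)                 # item 700 found in log2(1024) = 10 comparisons
```

A flat scan over the same 1024 items would inspect each one; the tree reaches the answer in 10 comparisons, which is the gap the abstract describes between exhaustive composition over the knowledge base and pruned hierarchical retrieval.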