[2602.18266] A Probabilistic Framework for LLM-Based Model Discovery

arXiv - Machine Learning · 3 min read

Summary

This paper presents a probabilistic framework for discovering mechanistic models with large language models (LLMs), introducing an algorithm called ModelSMC that treats model proposal and refinement as steps in probabilistic inference.

Why It Matters

The framework addresses the limitations of existing LLM-based model discovery methods, which often rely on heuristic procedures. By framing model discovery as probabilistic inference, it offers a more systematic approach that can lead to better scientific insights and model interpretability.

Key Takeaways

  • Model discovery can be enhanced by framing it as probabilistic inference.
  • The ModelSMC algorithm uses Sequential Monte Carlo sampling for model refinement.
  • This approach improves the interpretability of discovered models.
  • Experiments demonstrate better performance in real-world scientific systems.
  • The framework provides a unified perspective for developing LLM-based discovery methods.

Computer Science > Machine Learning · arXiv:2602.18266 (cs) · Submitted on 20 Feb 2026

Title: A Probabilistic Framework for LLM-Based Model Discovery
Authors: Stefan Wahl, Raphaela Schenk, Ali Farnoud, Jakob H. Macke, Daniel Gedon

Abstract: Automated methods for discovering mechanistic simulator models from observational data offer a promising path toward accelerating scientific progress. Such methods often take the form of agentic-style iterative workflows that repeatedly propose and revise candidate models by imitating human discovery processes. However, existing LLM-based approaches typically implement such workflows via hand-crafted heuristic procedures, without an explicit probabilistic formulation. We recast model discovery as probabilistic inference, i.e., as sampling from an unknown distribution over mechanistic models capable of explaining the data. This perspective provides a unified way to reason about model proposal, refinement, and selection within a single inference framework. As a concrete instantiation of this view, we introduce ModelSMC, an algorithm based on Sequential Monte Carlo sampling. ModelSMC represents candidate models as particles which are iteratively proposed and refined by an LLM, and weighted using likelihood-based criteria. Experiments on real-world scientific systems illustrate that this formulat...
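To make the particle-based view concrete, here is a minimal Sequential Monte Carlo sketch of the propose/refine/weight/resample loop the abstract describes. It is not the authors' implementation: the LLM proposal and refinement steps are replaced by a random perturbation of a single slope parameter (a stand-in for a symbolic simulator model), and the Gaussian likelihood, data, and all function names are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Toy setup: each "model" is a candidate slope a for y = a * x.
# In ModelSMC the particles would be simulator programs proposed by an LLM;
# a single float stands in here so the sketch is runnable.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

def log_likelihood(a, data, sigma=0.5):
    """Gaussian log-likelihood of the data under the model y = a * x."""
    return sum(-0.5 * ((y - a * x) / sigma) ** 2 for x, y in data)

def refine(a):
    """Stand-in for the LLM refinement step: perturb the candidate model."""
    return a + random.gauss(0.0, 0.3)

def model_smc(n_particles=50, n_rounds=5):
    # Initial proposals (in the paper, these would come from the LLM).
    particles = [random.uniform(0.0, 5.0) for _ in range(n_particles)]
    for _ in range(n_rounds):
        # Refine every particle, then weight by likelihood.
        particles = [refine(p) for p in particles]
        log_w = [log_likelihood(p, data) for p in particles]
        m = max(log_w)                               # subtract max for
        w = [math.exp(lw - m) for lw in log_w]       # numerical stability
        total = sum(w)
        w = [wi / total for wi in w]
        # Resample: well-fitting models survive, poor ones are dropped.
        particles = random.choices(particles, weights=w, k=n_particles)
    return particles

final = model_smc()
best = max(final, key=lambda p: log_likelihood(p, data))
print(round(best, 1))  # should land near the true slope of about 2
```

The resampling step is what distinguishes this from independent random search: likelihood weights concentrate the particle population on promising candidates, which is the role the paper assigns to its likelihood-based weighting of LLM-proposed models.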
