[2506.10914] Foundation Models for Causal Inference via Prior-Data Fitted Networks


arXiv - Machine Learning · 4 min read

Summary

This paper introduces CausalFM, a framework for training prior-data fitted networks (PFNs) as foundation models for causal inference. CausalFM enables Bayesian causal inference via in-context learning across settings such as back-door, front-door, and instrumental variable adjustment.

Why It Matters

Causal inference is critical in fields like medicine and economics. CausalFM represents a significant advancement by integrating foundation models with causal analysis, potentially transforming how practitioners approach causal inference tasks.

Key Takeaways

  • CausalFM trains PFN-based foundation models for causal inference across multiple identification settings.
  • The framework formalizes the construction of Bayesian priors from structural causal models (SCMs) and derives criteria for when such priors are valid.
  • A novel family of priors built on causality-inspired Bayesian neural networks covers back-door, front-door, and instrumental variable adjustment.
  • Trained CausalFM models perform causal inference via in-context learning, with performance competitive with specialized models.
  • The framework has the potential to change standard practice in causal analysis across disciplines such as medicine and economics.
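To make the second takeaway concrete: a PFN is pre-trained on many synthetic datasets, each drawn by first sampling structural parameters from a prior and then simulating the structural equations. The sketch below is an illustrative toy version of that idea for a back-door setting with observed confounders; the SCM, prior, and all names here are hypothetical and are not the paper's actual prior family.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm_dataset(n=256, d=3):
    """Draw one synthetic dataset from a toy back-door SCM prior.

    Illustrative only: the sampled coefficients are one 'prior draw';
    the resulting dataset (X, T, Y) is what a PFN would consume
    in-context during pre-training.
    """
    # Prior over structural parameters (the Bayesian prior).
    w_t = rng.normal(size=d)   # confounders -> treatment weights
    w_y = rng.normal(size=d)   # confounders -> outcome weights
    tau = rng.normal()         # treatment effect

    # Structural equations: X -> T and (X, T) -> Y.
    X = rng.normal(size=(n, d))                            # confounders
    T = (X @ w_t + rng.normal(size=n) > 0).astype(float)   # binary treatment
    Y = X @ w_y + tau * T + rng.normal(size=n)             # outcome
    return X, T, Y, tau

X, T, Y, tau = sample_scm_dataset()
print(X.shape, T.shape, Y.shape)  # (256, 3) (256,) (256,)
```

Pre-training would repeat this sampler millions of times, asking the transformer to predict causal quantities (e.g. the effect of T on Y) from each dataset given in-context.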

Computer Science > Machine Learning
arXiv:2506.10914 (cs)
[Submitted on 12 Jun 2025 (v1), last revised 24 Feb 2026 (this version, v3)]

Title: Foundation Models for Causal Inference via Prior-Data Fitted Networks
Authors: Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel

Abstract: Prior-data fitted networks (PFNs) have recently been proposed as a promising way to train tabular foundation models. PFNs are transformers that are pre-trained on synthetic data generated from a prespecified prior distribution and that enable Bayesian inference through in-context learning. In this paper, we introduce CausalFM, a comprehensive framework for training PFN-based foundation models in various causal inference settings. First, we formalize the construction of Bayesian priors for causal inference based on structural causal models (SCMs) in a principled way and derive necessary criteria for the validity of such priors. Building on this, we propose a novel family of prior distributions using causality-inspired Bayesian neural networks that enable CausalFM to perform Bayesian causal inference in various settings, including for back-door, front-door, and instrumental variable adjustment. Finally, we instantiate CausalFM and explicitly train models to perform in-context learning in these settings. We show that CausalFM achieve...
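The abstract's claim that PFNs "enable Bayesian inference through in-context learning" means the network is trained to output, in one forward pass, the posterior that classical methods would compute by integration. As a minimal sketch of the Bayesian target such a model amortizes (not the paper's method), here is a toy importance-sampling estimate of a posterior treatment effect under a simple hypothetical prior, with a randomized treatment and unit outcome noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy observed dataset: randomized binary treatment T and outcome Y
# with true effect tau = 2.0 (values hypothetical, for illustration).
n = 50
T = rng.integers(0, 2, size=n).astype(float)
Y = 2.0 * T + rng.normal(size=n)

# Prior draws over the effect, a stand-in for an SCM-based prior.
S = 10_000
tau_draws = rng.normal(0.0, 2.0, size=S)

# Importance weights proportional to the likelihood p(Y | tau)
# under unit Gaussian outcome noise.
log_w = np.array([-0.5 * np.sum((Y - t * T) ** 2) for t in tau_draws])
w = np.exp(log_w - log_w.max())

# Posterior mean of the effect: the quantity a trained PFN would
# emit directly from the in-context dataset, without any sampling.
posterior_mean = np.sum(w * tau_draws) / np.sum(w)
print(round(posterior_mean, 2))  # close to the true effect of 2.0
```

The point of the PFN framing is that this integration is done once, implicitly, during pre-training on prior samples; at inference time the transformer maps a new dataset straight to the posterior prediction.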

