[2512.11919] A fine-grained look at causal effects in causal spaces
arXiv - AI 4 min read

Statistics > Methodology · arXiv:2512.11919 (stat)
[Submitted on 11 Dec 2025 (v1), last revised 4 Apr 2026 (this version, v3)]

Title: A fine-grained look at causal effects in causal spaces
Authors: Junhyung Park, Yuqing Zhou

Abstract: The notion of causal effect is fundamental across many scientific disciplines. Traditionally, quantitative researchers have studied causal effects at the level of variables; for example, how a certain drug dose (W) causally affects a patient's blood pressure (Y). However, in many modern data domains, the raw variables, such as pixels in an image or tokens in a language model, do not have the semantic structure needed to formulate meaningful causal questions. In this paper, we offer a more fine-grained perspective by studying causal effects at the level of events, drawing inspiration from probability theory, where core notions such as independence are first defined for events and sigma-algebras before random variables enter the picture. Within the measure-theoretic framework of causal spaces, a recently introduced axiomatisation of causality, we first introduce several binary definitions that determine whether a causal effect is present, and prove properties that link causal effect to (in)dependence under an intervention measure. Further, we provide quantifying measures that capture the strength and n...
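To make the event-level notion concrete, here is a minimal simulation sketch, entirely illustrative and not taken from the paper: a hidden confounder U drives both a dose W and an outcome Y, and we compare the observational probability of the event {Y > 1} with its probability under the intervention do(W = 2). The structural equations, coefficients, and threshold are all hypothetical assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical structural model (not from the paper):
# hidden confounder U influences both the dose W and blood pressure Y.
U = rng.normal(size=n)
W = U + rng.normal(size=n)             # observational dose
Y = -0.5 * W + U + rng.normal(size=n)  # blood pressure

# Event of interest: A = {Y > 1}.
p_obs = np.mean(Y > 1)

# Intervention do(W = 2): cut the U -> W edge and set W exogenously,
# then regenerate Y from the same structural equation.
W_do = np.full(n, 2.0)
Y_do = -0.5 * W_do + U + rng.normal(size=n)
p_do = np.mean(Y_do > 1)

print(p_obs, p_do)
```

The two probabilities differ, which under the usual interventional reading indicates that W causally affects the event {Y > 1}; this is stated for the event itself, with no need to summarise Y's full distribution.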

Originally published on April 07, 2026. Curated by AI News.
