[2603.24753] Light Cones For Vision: Simple Causal Priors For Visual Hierarchy
Computer Science > Machine Learning arXiv:2603.24753 (cs) [Submitted on 25 Mar 2026]

Title: Light Cones For Vision: Simple Causal Priors For Visual Hierarchy

Authors: Manglam Kartik, Neel Tushar Shah

Abstract: Standard vision models treat objects as independent points in Euclidean space and cannot capture hierarchical structure such as parts within wholes. We introduce Worldline Slot Attention, which models objects as persistent trajectories through spacetime (worldlines): each object has multiple slots at different hierarchy levels that share the same spatial position but differ in temporal coordinate. Without the right geometric structure, this architecture consistently fails: Euclidean worldlines achieve 0.078 level accuracy, below random chance (0.33), while Lorentzian worldlines achieve 0.479-0.661 across three datasets, a roughly 6x improvement replicated over 20+ independent runs. Lorentzian geometry also outperforms hyperbolic embeddings, suggesting that visual hierarchies require causal structure (temporal dependency) rather than tree structure (radial branching). Our results demonstrate that hierarchical object discovery requires geometric structure encoding asymmetric causality, an inductive bias absent from Euclidean space but natural to Lorentzian light cones, achieved here with only 11K parameters. The code is available at: this ...
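To make the "asymmetric causality" prior concrete, the following is a minimal sketch (not the authors' implementation; all function names are illustrative) of the standard Minkowski interval with signature (-, +, +) and a future-light-cone membership test. It illustrates how a "whole" slot at an earlier temporal coordinate can causally precede a "part" slot at the same spatial position, while the reverse direction is excluded:

```python
import numpy as np

def minkowski_interval(x, y):
    """Squared Lorentzian interval between events x and y,
    signature (-, +, +): ds^2 = -(dt)^2 + |dx|^2."""
    d = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    return -d[0] ** 2 + np.sum(d[1:] ** 2)

def in_future_light_cone(x, y):
    """True iff event y lies in the causal future of event x:
    timelike or lightlike separation with a positive time offset."""
    d = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    return minkowski_interval(x, y) <= 0 and d[0] > 0

# Hypothetical slot coordinates (t, x1, x2): same spatial position,
# different temporal coordinates for different hierarchy levels.
whole = [0.0, 0.5, 0.5]
part  = [1.0, 0.5, 0.5]

print(in_future_light_cone(whole, part))  # True:  part is in the whole's future cone
print(in_future_light_cone(part, whole))  # False: the relation is asymmetric
```

The asymmetry of `in_future_light_cone` is the key property unavailable in Euclidean space, where any distance-based relation is necessarily symmetric.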