[2603.25796] Beyond identifiability: Learning causal representations with few environments and finite samples
Statistics > Machine Learning
arXiv:2603.25796 (stat)
[Submitted on 26 Mar 2026]

Title: Beyond identifiability: Learning causal representations with few environments and finite samples
Authors: Inbeom Lee, Tongtong Jin, Bryon Aragam

Abstract: We provide explicit, finite-sample guarantees for learning causal representations from data with a sublinear number of environments. Causal representation learning seeks to provide a rigorous foundation for the general representation learning problem by bridging causal models with latent factor models in order to learn interpretable representations with causal semantics. Despite a blossoming theory of identifiability in causal representation learning, estimation and finite-sample bounds are less well understood. We show that causal representations can be learned with only a logarithmic number of unknown, multi-node interventions, and that the intervention targets need not be carefully designed in advance. Through a careful perturbation analysis, we provide a new analysis of this problem that guarantees consistent recovery of (a) the latent causal graph, (b) the mixing matrix and representations, and (c) \emph{unknown} intervention targets.

Subjects: Machine Learning (stat.ML); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Statistics Theory (math.ST)
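To make the setting concrete, the following is a minimal sketch of the kind of data-generating process studied in this line of work: a linear SEM over latent variables, pushed through an unknown mixing matrix, observed across a logarithmic number of environments whose multi-node intervention targets are chosen at random rather than designed. The linear model, the soft-intervention mechanism, and all names (`sample_env`, `B`, `G`) are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 8, 2000                    # latent dimension, samples per environment
K = int(np.ceil(np.log2(d))) + 1  # logarithmic number of environments (plus observational)

# Hypothetical latent linear SEM: upper-triangular weighted DAG, edge i -> j for i < j.
B = np.triu(rng.normal(size=(d, d)), k=1) * (rng.random((d, d)) < 0.4)

# Unknown invertible mixing matrix taking latents Z to observations X.
G = rng.normal(size=(d, d))

def sample_env(targets, n):
    """Sample n observations from one environment.

    `targets` is a set of intervened latent nodes; interventions are modeled
    here as mechanism changes that sever incoming edges (an assumption).
    """
    Z = np.zeros((n, d))
    for j in range(d):                         # topological order is 0..d-1
        noise = rng.normal(size=n)
        if j in targets:
            Z[:, j] = 2.0 * noise + 1.0        # intervened: new mechanism, no parents
        else:
            Z[:, j] = Z @ B[:, j] + noise      # usual structural assignment
    return Z @ G.T                             # observed data after mixing

# Environment 0 is observational; the rest intervene on *random* multi-node
# target sets -- reflecting the abstract's point that targets need not be
# carefully designed in advance and are unknown to the learner.
envs = [sample_env(set(), n)]
targets = [set(rng.choice(d, size=rng.integers(1, d // 2 + 1), replace=False).tolist())
           for _ in range(K - 1)]
envs += [sample_env(T, n) for T in targets]

print(f"{K} environments for d={d} latents; intervention targets: {targets}")
```

Under the paper's guarantees, a learner seeing only the `envs` arrays (never `B`, `G`, or `targets`) would consistently recover the latent graph, the mixing matrix, and the unknown target sets from these finitely many samples.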