[2512.10720] Beyond the Black Box: Identifiable Interpretation and Control in Generative Models via Causal Minimality
Computer Science > Machine Learning
arXiv:2512.10720 (cs)
[Submitted on 11 Dec 2025 (v1), last revised 2 Apr 2026 (this version, v2)]

Title: Beyond the Black Box: Identifiable Interpretation and Control in Generative Models via Causal Minimality
Authors: Lingjing Kong, Shaoan Xie, Guangyi Chen, Yuewen Sun, Xiangchen Song, Eric P. Xing, Kun Zhang

Abstract: Deep generative models, while revolutionizing fields like image and text generation, largely operate as opaque "black boxes", hindering human understanding, control, and alignment. While methods like sparse autoencoders (SAEs) show remarkable empirical success, they often lack theoretical guarantees, risking subjective insights. Our primary objective is to establish a principled foundation for interpretable generative models. We demonstrate that the principle of causal minimality -- favoring the simplest causal explanation -- can endow the latent representations of modern generative models with clear causal interpretation and robust, component-wise identifiable control. We introduce a novel theoretical framework for hierarchical selection models, where higher-level concepts emerge from the constrained composition of lower-level variables, better capturing the complex dependencies in data generation. Under theoretically derived minimality condi...
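The abstract contrasts the paper's identifiability results with sparse autoencoders (SAEs), which impose a sparsity (simplicity) pressure on latent codes. As a point of reference only, here is a minimal, hypothetical NumPy sketch of the SAE objective the abstract alludes to: an overcomplete ReLU encoder whose latent code is penalized by an L1 term alongside reconstruction error. All dimensions, names, and coefficients are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: project d_model activations into an
# overcomplete d_latent code (d_latent > d_model), as SAEs typically do.
d_model, d_latent, n = 8, 32, 64
W_enc = rng.normal(0.0, 0.1, (d_model, d_latent))
W_dec = rng.normal(0.0, 0.1, (d_latent, d_model))
b_enc = np.zeros(d_latent)

def encode(x):
    # ReLU keeps codes nonnegative; the L1 penalty below then drives
    # most coordinates toward zero (the sparsity/"simplicity" pressure).
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(z):
    return z @ W_dec

x = rng.normal(size=(n, d_model))   # stand-in for model activations
z = encode(x)
x_hat = decode(z)

mse = ((x - x_hat) ** 2).mean()     # reconstruction term
l1 = np.abs(z).mean()               # sparsity penalty
loss = mse + 1e-3 * l1              # SAE-style training objective
```

In practice such an objective is minimized by gradient descent over `W_enc`, `W_dec`, and `b_enc`; the paper's contribution, per the abstract, is to replace this purely empirical sparsity heuristic with minimality conditions that carry identifiability guarantees.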