[2604.01117] Reconsidering Dependency Networks from an Information Geometry Perspective
Computer Science > Machine Learning
arXiv:2604.01117 (cs)
[Submitted on 1 Apr 2026]

Title: Reconsidering Dependency Networks from an Information Geometry Perspective
Authors: Kazuya Takabatake, Shotaro Akaho

Abstract: Dependency networks (Heckerman et al., 2000) provide a flexible framework for modeling complex systems with many variables by combining independently learned local conditional distributions through pseudo-Gibbs sampling. Despite their computational advantages over Bayesian and Markov networks, the theoretical foundations of dependency networks remain incomplete, primarily because their model distributions -- defined as the stationary distributions of pseudo-Gibbs sampling -- lack closed-form expressions. This paper develops an information-geometric analysis of pseudo-Gibbs sampling, interpreting each sampling step as an m-projection onto a full conditional manifold. Building on this interpretation, we introduce the full conditional divergence and derive an upper bound that characterizes the location of the stationary distribution in the space of probability distributions. We then reformulate both structure and parameter learning as optimization problems that decompose into independent subproblems for each node, and prove that the learned model distribution converges to the true underlying distribution.
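To make the pseudo-Gibbs sampling procedure concrete, here is a minimal Python sketch for a dependency network over binary variables. The logistic parameterization of the local conditionals and all function names are illustrative assumptions, not the paper's construction; the point is only that each step resamples a single node from its independently learned conditional p_i(x_i | x_{-i}), and that the empirical distribution of the resulting chain approximates the stationary (model) distribution discussed in the abstract.

```python
# A minimal sketch of pseudo-Gibbs sampling in a dependency network over
# binary variables. The per-node conditionals stand in for the independently
# learned local models p_i(x_i | x_{-i}); the logistic form is an
# illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

def make_logistic_conditional(weights, bias):
    """Return p_i(x_i = 1 | x_{-i}) as a logistic function of the other nodes."""
    def conditional(x_rest):
        return 1.0 / (1.0 + np.exp(-(weights @ x_rest + bias)))
    return conditional

def pseudo_gibbs(conditionals, n_nodes, n_sweeps, rng):
    """Cyclically resample each node from its local conditional.

    The empirical distribution of the visited states approximates the
    stationary distribution of the chain, i.e. the dependency network's
    model distribution."""
    x = rng.integers(0, 2, size=n_nodes).astype(float)
    samples = []
    for _ in range(n_sweeps):
        for i in range(n_nodes):
            x_rest = np.delete(x, i)
            p1 = conditionals[i](x_rest)        # p_i(x_i = 1 | x_{-i})
            x[i] = float(rng.random() < p1)     # resample node i only
        samples.append(x.copy())
    return np.array(samples)

# Three nodes with arbitrary local conditionals. Nothing forces these
# independently specified models to be consistent with any single joint
# distribution, which is why the stationary distribution generally lacks
# a closed-form expression.
conditionals = [
    make_logistic_conditional(np.array([1.5, -0.5]), 0.1),
    make_logistic_conditional(np.array([0.8, 0.8]), -0.3),
    make_logistic_conditional(np.array([-1.0, 1.2]), 0.0),
]
samples = pseudo_gibbs(conditionals, n_nodes=3, n_sweeps=5000, rng=rng)
print("empirical mean of each node:", samples.mean(axis=0))
```

Note that when the local conditionals do happen to be the full conditionals of a single joint distribution, this procedure reduces to ordinary Gibbs sampling; the interesting case analyzed in the paper is the inconsistent one, where the chain's stationary distribution must be characterized indirectly, e.g. via the upper bound the authors derive.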