[2402.06223] Beyond DAGs: A Latent Partial Causal Model for Multimodal Learning
Computer Science > Machine Learning
arXiv:2402.06223 (cs)
[Submitted on 9 Feb 2024 (v1), last revised 27 Feb 2026 (this version, v3)]

Title: Beyond DAGs: A Latent Partial Causal Model for Multimodal Learning
Authors: Yuhang Liu, Zhen Zhang, Dong Gong, Erdun Gao, Biwei Huang, Mingming Gong, Anton van den Hengel, Kun Zhang, Javen Qinfeng Shi

Abstract: Directed Acyclic Graphs (DAGs) are a standard tool in causal modeling, but their suitability for capturing the complexity of large-scale multimodal data is questionable. In practice, real-world multimodal datasets are often collected from heterogeneous generative processes that do not conform to a single DAG. Instead, they may involve multiple, and even opposing, DAG structures with inverse causal directions. To address this gap, we first propose a novel latent partial causal model tailored for multimodal representation learning, featuring two coupled latent variable parts connected by an undirected edge, to represent the transfer of knowledge across modalities. Under specific statistical assumptions, we establish an identifiability result, demonstrating that representations learned by MultiModal Contrastive Learning (MMCL) correspond to the latent coupled variables up to a trivial transformation. This result deepens our understanding of why MMCL works, highlights its...
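As context for the identifiability result, the MMCL objective the abstract refers to is typically instantiated as a symmetric InfoNCE loss over paired embeddings from the two modalities (as in CLIP-style training). The sketch below is an illustrative NumPy implementation of that standard loss, not the authors' specific formulation; the function name `mmcl_loss` and the toy embedding shapes are assumptions for the example.

```python
import numpy as np

def mmcl_loss(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss over n paired embeddings from two modalities.

    z_a, z_b: (n, d) arrays where row i of each is one positive pair
    (e.g. an image embedding and its caption embedding).
    Returns the average of the a->b and b->a contrastive cross-entropies.
    """
    # L2-normalise so dot products are cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (n, n); diagonal = positive pairs
    # Log-softmax over each row (a->b) and each column (b->a).
    log_p_ab = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_ba = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    n = logits.shape[0]
    # Cross-entropy picks out the diagonal (matched-pair) log-probabilities.
    return -0.5 * (np.trace(log_p_ab) + np.trace(log_p_ba)) / n
```

Intuitively, minimising this loss pulls matched cross-modal pairs together and pushes mismatched ones apart; the paper's claim is that, under its statistical assumptions, the representations reaching this optimum recover the coupled latent variables up to a trivial transformation.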