[2507.06547] Concept-TRAK: Understanding how diffusion models learn concepts through concept-level attribution
Computer Science > Computer Vision and Pattern Recognition
arXiv:2507.06547 (cs)
[Submitted on 9 Jul 2025 (v1), last revised 2 Mar 2026 (this version, v3)]

Title: Concept-TRAK: Understanding how diffusion models learn concepts through concept-level attribution
Authors: Yonghyun Park, Chieh-Hsin Lai, Satoshi Hayakawa, Yuhta Takida, Naoki Murata, Wei-Hsiang Liao, Woosung Choi, Kin Wai Cheuk, Junghyun Koo, Yuki Mitsufuji

Abstract: While diffusion models excel at image generation, their growing adoption raises critical concerns about copyright and model transparency. Existing attribution methods identify training examples that influence an entire image, but fall short of isolating contributions to specific elements, such as styles or objects, that are of primary concern to stakeholders. To address this gap, we introduce concept-level attribution through a novel method called Concept-TRAK, which extends influence functions with a key innovation: specialized training and utility loss functions designed to isolate concept-specific influences rather than overall reconstruction quality. We evaluate Concept-TRAK on novel concept attribution benchmarks using synthetic and CelebA-HQ datasets, as well as the established AbC benchmark, showing substantial improvements over prior methods in con...
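To make the abstract's core idea concrete: TRAK-style influence methods score each training example by comparing its loss gradient against the gradient of a query-specific utility, typically after a random projection to keep gradients tractable. The sketch below is a heavily simplified illustration of that gradient-similarity scoring, not the paper's method: a linear model with squared loss stands in for the diffusion model, and an ordinary query-example gradient stands in for Concept-TRAK's concept-specific utility loss. All function names here (`loss_grad`, `score_training_examples`) are hypothetical.

```python
import numpy as np

def loss_grad(w, x, y):
    # Gradient of the per-example squared loss 0.5 * (w.x - y)^2 w.r.t. w.
    return (w @ x - y) * x

def score_training_examples(w, X_train, y_train, x_query, y_query,
                            proj_dim=8, seed=0):
    # TRAK-style scoring: project per-example gradients with a random
    # matrix P, then score each training example by the inner product
    # of its projected gradient with the projected query gradient:
    #   score_i ~ <P g_i, P g_query>
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((proj_dim, len(w))) / np.sqrt(proj_dim)
    g_query = P @ loss_grad(w, x_query, y_query)
    return [float((P @ loss_grad(w, x, y)) @ g_query)
            for x, y in zip(X_train, y_train)]

# Toy data: the query example is identical to the first training point,
# so its gradient aligns with that point's gradient and its score is positive.
w = np.array([1.0, -0.5])
X_train = np.array([[1.0, 0.0], [0.0, 1.0]])
y_train = np.array([0.0, 0.0])
scores = score_training_examples(w, X_train, y_train,
                                 x_query=np.array([1.0, 0.0]), y_query=0.0)
```

Concept-TRAK's contribution, per the abstract, is replacing the generic reconstruction-loss gradients in this recipe with training and utility losses that isolate a specific concept (e.g. a style or object) rather than overall image quality.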