[2511.12834] SAGA: Source Attribution of Generative AI Videos
Computer Science > Computer Vision and Pattern Recognition
arXiv:2511.12834 (cs)
[Submitted on 16 Nov 2025 (v1), last revised 2 Apr 2026 (this version, v2)]

Title: SAGA: Source Attribution of Generative AI Videos
Authors: Rohit Kundu, Vishal Mohanty, Hao Xiong, Shan Jia, Athula Balachandran, Amit K. Roy-Chowdhury

Abstract: The proliferation of generative AI has led to hyper-realistic synthetic videos, escalating misuse risks and outstripping binary real/fake detectors. We introduce SAGA (Source Attribution of Generative AI videos), the first comprehensive framework to address the urgent need for AI-generated video source attribution at a large scale. Unlike traditional detection, SAGA identifies the specific generative model used. It uniquely provides multi-granular attribution across five levels: authenticity, generation task (e.g., T2V/I2V), model version, development team, and the precise generator, offering far richer forensic insights. Our novel video transformer architecture, leveraging features from a robust vision foundation model, effectively captures spatio-temporal artifacts. Critically, we introduce a data-efficient pretrain-and-attribute strategy, enabling SAGA to achieve state-of-the-art attribution using only 0.5% of source-labeled data per class, matching fully supervised performance. Furthermore, we propose Temporal...
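The five-level attribution scheme described in the abstract can be pictured as a set of classification heads over a single pooled video embedding. The sketch below is an illustrative assumption, not the paper's implementation: the level names, label-space sizes, embedding dimension, and the linear heads are all hypothetical stand-ins for whatever SAGA's transformer actually produces.

```python
import numpy as np

# Hypothetical label spaces for the five attribution levels named in the
# abstract; the class counts here are illustrative, not from the paper.
LEVELS = {
    "authenticity": 2,    # real vs. AI-generated
    "task": 2,            # e.g., T2V vs. I2V
    "model_version": 8,
    "team": 5,
    "generator": 20,
}

EMBED_DIM = 768  # assumed size of the pooled foundation-model feature
rng = np.random.default_rng(0)

# One linear head per granularity level (randomly initialized stand-ins
# for trained classifiers).
heads = {name: rng.standard_normal((EMBED_DIM, n)) * 0.01
         for name, n in LEVELS.items()}

def attribute(video_embedding: np.ndarray) -> dict:
    """Return a predicted class index for each attribution level."""
    preds = {}
    for name, weights in heads.items():
        logits = video_embedding @ weights
        preds[name] = int(np.argmax(logits))
    return preds

# Stand-in for a pooled spatio-temporal feature from the video transformer.
emb = rng.standard_normal(EMBED_DIM)
print(attribute(emb))
```

A hierarchy-aware variant could instead condition finer heads (generator) on coarser predictions (team), but the abstract does not specify whether the levels are predicted jointly or in cascade.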