[2506.10941] VINCIE: Unlocking In-context Image Editing from Video
Computer Science > Computer Vision and Pattern Recognition
arXiv:2506.10941 (cs)
[Submitted on 12 Jun 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: VINCIE: Unlocking In-context Image Editing from Video
Authors: Leigang Qu, Feng Cheng, Ziyan Yang, Qi Zhao, Shanchuan Lin, Yichun Shi, Yicong Li, Wenjie Wang, Tat-Seng Chua, Lu Jiang

Abstract: In-context image editing aims to modify images based on a contextual sequence comprising text and previously generated images. Existing methods typically depend on task-specific pipelines and expert models (e.g., segmentation and inpainting) to curate training data. In this work, we explore whether an in-context image editing model can be learned directly from videos. We introduce a scalable approach to annotate videos as interleaved multimodal sequences. To effectively learn from this data, we design a block-causal diffusion transformer trained on three proxy tasks: next-image prediction, current segmentation prediction, and next-segmentation prediction. Additionally, we propose a novel multi-turn image editing benchmark to advance research in this area. Extensive experiments demonstrate that our model exhibits strong in-context image editing capabilities and achieves state-of-the-art results on two multi-turn image editing benchmarks. Despite being trained exclusively on video...
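The abstract does not specify the paper's exact attention layout, but the term "block-causal" conventionally means full attention within each block of an interleaved sequence (e.g., a turn's text or image tokens) and causal attention across blocks. A minimal illustrative sketch of such a mask, assuming each token carries a block index (the function name and example block layout are hypothetical, not from the paper):

```python
import numpy as np

def block_causal_mask(block_ids):
    """Boolean attention mask for a block-causal transformer.

    Token i may attend to token j iff j's block index <= i's
    block index: full (bidirectional) attention within a block,
    causal attention across blocks.
    """
    b = np.asarray(block_ids)
    # Broadcast to an (n, n) matrix of pairwise block comparisons.
    return b[None, :] <= b[:, None]

# Hypothetical 2-turn interleaved sequence:
# [text1, text1, img1, img1, img1, text2, img2]
mask = block_causal_mask([0, 0, 1, 1, 1, 2, 3])
```

Within block 0, tokens attend to each other (`mask[0, 1]` is True), while no token can attend to a later block (`mask[0, 2]` is False); image tokens in turn 1 still see all of turn 1's text (`mask[4, 0]` is True).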