[2603.02697] ShareVerse: Multi-Agent Consistent Video Generation for Shared World Modeling
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.02697 (cs)
[Submitted on 3 Mar 2026]

Title: ShareVerse: Multi-Agent Consistent Video Generation for Shared World Modeling
Authors: Jiayi Zhu, Jianing Zhang, Yiying Yang, Wei Cheng, Xiaoyun Yuan

Abstract: This paper presents ShareVerse, a video generation framework for multi-agent shared world modeling, addressing a gap in existing work: the lack of support for constructing a unified shared world with multi-agent interaction. ShareVerse builds on the generation capability of large video models and integrates three key innovations: 1) a large-scale dataset for multi-agent interactive world modeling, built on the CARLA simulation platform, featuring diverse scenes, weather conditions, and interactive trajectories with paired multi-view videos (front/rear/left/right views per agent) and camera data; 2) a spatial concatenation strategy for the four-view videos of independent agents, which models a broader environment and ensures internal multi-view geometric consistency; 3) cross-agent attention blocks integrated into the pretrained video model, which enable interactive transmission of spatial-temporal information across agents, guaranteeing shared-world consistency in overlapping regions and reasonable generation in non-overlapping …
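The spatial concatenation strategy (innovation 2) can be pictured as tiling each agent's four camera views into a single composite frame before it is fed to the video model. The abstract does not specify the exact layout, so the 2×2 arrangement below (front/rear on top, left/right below) is an illustrative assumption, sketched in numpy:

```python
import numpy as np

def concat_four_views(front, rear, left, right):
    """Tile one agent's four views into a single 2x2 composite frame.

    Layout (front|rear over left|right) is a hypothetical choice;
    the paper's actual arrangement is not given in this excerpt.
    """
    top = np.concatenate([front, rear], axis=1)      # join widthwise
    bottom = np.concatenate([left, right], axis=1)
    return np.concatenate([top, bottom], axis=0)     # stack heightwise

# Toy H x W x C frames, one constant value per view for easy inspection.
views = [np.full((4, 6, 3), i, dtype=np.uint8) for i in range(4)]
frame = concat_four_views(*views)
print(frame.shape)  # (8, 12, 3): four 4x6 views tiled into one frame
```

Per-agent composites like this one could then be concatenated across agents, so the model sees all views of the shared scene jointly and can enforce geometric consistency among them.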
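The cross-agent attention blocks (innovation 3) route spatial-temporal information between agents' token streams. The paper's block design is not detailed in this excerpt; as a minimal sketch, a single-head cross-attention in numpy, where agent A's tokens query agent B's tokens (all shapes and projection matrices are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_agent_attention(tokens_a, tokens_b, Wq, Wk, Wv):
    """Single-head cross-attention: agent A queries agent B's tokens."""
    Q = tokens_a @ Wq
    K = tokens_b @ Wk
    V = tokens_b @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # scaled dot-product
    return softmax(scores) @ V                # B's features, routed to A

rng = np.random.default_rng(0)
tokens_a = rng.standard_normal((5, 16))       # 5 tokens from agent A
tokens_b = rng.standard_normal((7, 16))       # 7 tokens from agent B
Wq, Wk, Wv = (rng.standard_normal((16, 8)) for _ in range(3))
out = cross_agent_attention(tokens_a, tokens_b, Wq, Wk, Wv)
print(out.shape)  # (5, 8): one updated feature per agent-A token
```

Running the same exchange in the other direction (B querying A) and inserting such blocks between the layers of a pretrained video model is one plausible way to realize the cross-agent information flow the abstract describes.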